00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 223 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3724 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.117 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.118 The recommended git tool is: git 00:00:00.118 using credential 00000000-0000-0000-0000-000000000002 00:00:00.119 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.172 Fetching changes from the remote Git repository 00:00:00.174 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.223 Using shallow fetch with depth 1 00:00:00.223 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.223 > git --version # timeout=10 00:00:00.257 > git --version # 'git version 2.39.2' 00:00:00.257 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.279 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.279 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.335 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.346 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.358 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.358 > git config core.sparsecheckout # timeout=10 00:00:08.368 > git read-tree -mu HEAD # timeout=10 00:00:08.386 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.421 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.421 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.597 [Pipeline] Start of Pipeline 00:00:08.608 [Pipeline] library 00:00:08.610 Loading library shm_lib@master 00:00:08.610 Library shm_lib@master is cached. Copying from home. 00:00:08.621 [Pipeline] node 00:00:08.633 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:08.635 [Pipeline] { 00:00:08.642 [Pipeline] catchError 00:00:08.643 [Pipeline] { 00:00:08.654 [Pipeline] wrap 00:00:08.663 [Pipeline] { 00:00:08.671 [Pipeline] stage 00:00:08.673 [Pipeline] { (Prologue) 00:00:08.689 [Pipeline] echo 00:00:08.691 Node: VM-host-SM0 00:00:08.697 [Pipeline] cleanWs 00:00:08.708 [WS-CLEANUP] Deleting project workspace... 00:00:08.708 [WS-CLEANUP] Deferred wipeout is used... 
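Note on the checkout above: Jenkins pins the jbp repository to a single revision using a shallow fetch. A minimal sketch of the same sequence, runnable outside Jenkins (repository URL and revision taken from the log; the proxy and credential setup are omitted, and it assumes the pinned revision is the fetched branch tip, as it is here):

    #!/bin/bash
    # Shallow-fetch one branch, then detach onto the pinned revision,
    # mirroring the git commands Jenkins ran above.
    set -euo pipefail
    repo=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    rev=db4637e8b949f278f369ec13f70585206ccd9507
    git init jbp && cd jbp
    git fetch --tags --force --progress --depth=1 -- "$repo" refs/heads/master
    git checkout -f "$rev"   # detached HEAD at the pinned commit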
00:00:08.715 [WS-CLEANUP] done 00:00:08.924 [Pipeline] setCustomBuildProperty 00:00:09.019 [Pipeline] httpRequest 00:00:09.377 [Pipeline] echo 00:00:09.378 Sorcerer 10.211.164.20 is alive 00:00:09.385 [Pipeline] retry 00:00:09.387 [Pipeline] { 00:00:09.398 [Pipeline] httpRequest 00:00:09.402 HttpMethod: GET 00:00:09.403 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.403 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.418 Response Code: HTTP/1.1 200 OK 00:00:09.419 Success: Status code 200 is in the accepted range: 200,404 00:00:09.419 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:29.610 [Pipeline] } 00:00:29.621 [Pipeline] // retry 00:00:29.627 [Pipeline] sh 00:00:29.901 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:29.917 [Pipeline] httpRequest 00:00:30.318 [Pipeline] echo 00:00:30.319 Sorcerer 10.211.164.20 is alive 00:00:30.328 [Pipeline] retry 00:00:30.330 [Pipeline] { 00:00:30.344 [Pipeline] httpRequest 00:00:30.348 HttpMethod: GET 00:00:30.349 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:30.350 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:30.361 Response Code: HTTP/1.1 200 OK 00:00:30.362 Success: Status code 200 is in the accepted range: 200,404 00:00:30.362 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:41.714 [Pipeline] } 00:01:41.732 [Pipeline] // retry 00:01:41.740 [Pipeline] sh 00:01:42.022 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:44.566 [Pipeline] sh 00:01:44.846 + git -C spdk log --oneline -n5 00:01:44.846 b18e1bd62 version: v24.09.1-pre 00:01:44.846 19524ad45 version: v24.09 00:01:44.846 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:01:44.846 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:01:44.846 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:01:44.864 [Pipeline] withCredentials 00:01:44.874 > git --version # timeout=10 00:01:44.888 > git --version # 'git version 2.39.2' 00:01:44.904 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:44.906 [Pipeline] { 00:01:44.916 [Pipeline] retry 00:01:44.918 [Pipeline] { 00:01:44.934 [Pipeline] sh 00:01:45.215 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:45.227 [Pipeline] } 00:01:45.245 [Pipeline] // retry 00:01:45.250 [Pipeline] } 00:01:45.266 [Pipeline] // withCredentials 00:01:45.276 [Pipeline] httpRequest 00:01:45.673 [Pipeline] echo 00:01:45.675 Sorcerer 10.211.164.20 is alive 00:01:45.685 [Pipeline] retry 00:01:45.687 [Pipeline] { 00:01:45.701 [Pipeline] httpRequest 00:01:45.712 HttpMethod: GET 00:01:45.712 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:45.713 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:45.725 Response Code: HTTP/1.1 200 OK 00:01:45.725 Success: Status code 200 is in the accepted range: 200,404 00:01:45.726 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:00.657 [Pipeline] } 00:02:00.673 [Pipeline] // retry 00:02:00.681 [Pipeline] sh 00:02:00.961 + tar --no-same-owner -xf 
dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:02.346 [Pipeline] sh 00:02:02.627 + git -C dpdk log --oneline -n5 00:02:02.627 eeb0605f11 version: 23.11.0 00:02:02.627 238778122a doc: update release notes for 23.11 00:02:02.627 46aa6b3cfc doc: fix description of RSS features 00:02:02.627 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:02.627 7e421ae345 devtools: support skipping forbid rule check 00:02:02.644 [Pipeline] writeFile 00:02:02.658 [Pipeline] sh 00:02:02.940 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:02.951 [Pipeline] sh 00:02:03.327 + cat autorun-spdk.conf 00:02:03.327 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:03.327 SPDK_TEST_NVMF=1 00:02:03.327 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:03.327 SPDK_TEST_VFIOUSER=1 00:02:03.327 SPDK_TEST_USDT=1 00:02:03.327 SPDK_RUN_UBSAN=1 00:02:03.327 SPDK_TEST_NVMF_MDNS=1 00:02:03.327 NET_TYPE=virt 00:02:03.327 SPDK_JSONRPC_GO_CLIENT=1 00:02:03.327 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:03.327 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:03.327 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:03.334 RUN_NIGHTLY=1 00:02:03.336 [Pipeline] } 00:02:03.350 [Pipeline] // stage 00:02:03.364 [Pipeline] stage 00:02:03.366 [Pipeline] { (Run VM) 00:02:03.378 [Pipeline] sh 00:02:03.659 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:03.659 + echo 'Start stage prepare_nvme.sh' 00:02:03.659 Start stage prepare_nvme.sh 00:02:03.659 + [[ -n 7 ]] 00:02:03.659 + disk_prefix=ex7 00:02:03.659 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:02:03.659 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:02:03.659 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:02:03.659 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:03.659 ++ SPDK_TEST_NVMF=1 00:02:03.659 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:03.659 ++ SPDK_TEST_VFIOUSER=1 00:02:03.659 ++ SPDK_TEST_USDT=1 00:02:03.659 ++ SPDK_RUN_UBSAN=1 00:02:03.659 ++ SPDK_TEST_NVMF_MDNS=1 00:02:03.659 ++ NET_TYPE=virt 00:02:03.659 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:03.659 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:03.659 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:03.659 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:03.659 ++ RUN_NIGHTLY=1 00:02:03.659 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:02:03.659 + nvme_files=() 00:02:03.659 + declare -A nvme_files 00:02:03.659 + backend_dir=/var/lib/libvirt/images/backends 00:02:03.659 + nvme_files['nvme.img']=5G 00:02:03.659 + nvme_files['nvme-cmb.img']=5G 00:02:03.659 + nvme_files['nvme-multi0.img']=4G 00:02:03.659 + nvme_files['nvme-multi1.img']=4G 00:02:03.659 + nvme_files['nvme-multi2.img']=4G 00:02:03.659 + nvme_files['nvme-openstack.img']=8G 00:02:03.659 + nvme_files['nvme-zns.img']=5G 00:02:03.659 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:03.659 + (( SPDK_TEST_FTL == 1 )) 00:02:03.659 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:03.659 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:03.659 + for nvme in "${!nvme_files[@]}" 00:02:03.659 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:02:03.659 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:03.659 + for nvme in "${!nvme_files[@]}" 00:02:03.659 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:02:03.659 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:03.659 + for nvme in "${!nvme_files[@]}" 00:02:03.659 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:02:03.659 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:03.659 + for nvme in "${!nvme_files[@]}" 00:02:03.659 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:02:03.659 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:03.659 + for nvme in "${!nvme_files[@]}" 00:02:03.659 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:02:03.659 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:03.659 + for nvme in "${!nvme_files[@]}" 00:02:03.659 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:02:03.659 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:03.918 + for nvme in "${!nvme_files[@]}" 00:02:03.918 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:02:03.918 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:03.918 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:02:03.918 + echo 'End stage prepare_nvme.sh' 00:02:03.918 End stage prepare_nvme.sh 00:02:03.930 [Pipeline] sh 00:02:04.214 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:04.214 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39 00:02:04.214 00:02:04.214 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:02:04.214 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:02:04.214 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:02:04.214 HELP=0 00:02:04.214 DRY_RUN=0 00:02:04.214 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:02:04.214 NVME_DISKS_TYPE=nvme,nvme, 00:02:04.214 NVME_AUTO_CREATE=0 00:02:04.214 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:02:04.214 NVME_CMB=,, 00:02:04.214 NVME_PMR=,, 00:02:04.214 NVME_ZNS=,, 00:02:04.214 NVME_MS=,, 00:02:04.214 NVME_FDP=,, 00:02:04.214 
SPDK_VAGRANT_DISTRO=fedora39 00:02:04.214 SPDK_VAGRANT_VMCPU=10 00:02:04.214 SPDK_VAGRANT_VMRAM=12288 00:02:04.214 SPDK_VAGRANT_PROVIDER=libvirt 00:02:04.214 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:04.214 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:04.214 SPDK_OPENSTACK_NETWORK=0 00:02:04.214 VAGRANT_PACKAGE_BOX=0 00:02:04.214 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:04.214 FORCE_DISTRO=true 00:02:04.214 VAGRANT_BOX_VERSION= 00:02:04.214 EXTRA_VAGRANTFILES= 00:02:04.214 NIC_MODEL=e1000 00:02:04.214 00:02:04.214 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:02:04.214 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:02:06.747 Bringing machine 'default' up with 'libvirt' provider... 00:02:07.683 ==> default: Creating image (snapshot of base box volume). 00:02:07.683 ==> default: Creating domain with the following settings... 00:02:07.683 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734200543_af6440f5c99e7e073d8d 00:02:07.683 ==> default: -- Domain type: kvm 00:02:07.683 ==> default: -- Cpus: 10 00:02:07.683 ==> default: -- Feature: acpi 00:02:07.683 ==> default: -- Feature: apic 00:02:07.683 ==> default: -- Feature: pae 00:02:07.683 ==> default: -- Memory: 12288M 00:02:07.683 ==> default: -- Memory Backing: hugepages: 00:02:07.683 ==> default: -- Management MAC: 00:02:07.683 ==> default: -- Loader: 00:02:07.683 ==> default: -- Nvram: 00:02:07.683 ==> default: -- Base box: spdk/fedora39 00:02:07.683 ==> default: -- Storage pool: default 00:02:07.683 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734200543_af6440f5c99e7e073d8d.img (20G) 00:02:07.683 ==> default: -- Volume Cache: default 00:02:07.683 ==> default: -- Kernel: 00:02:07.683 ==> default: -- Initrd: 00:02:07.683 ==> default: -- Graphics Type: vnc 00:02:07.683 ==> default: -- Graphics Port: -1 00:02:07.683 ==> default: -- Graphics IP: 127.0.0.1 00:02:07.683 ==> default: -- Graphics Password: Not defined 00:02:07.683 ==> default: -- Video Type: cirrus 00:02:07.683 ==> default: -- Video VRAM: 9216 00:02:07.683 ==> default: -- Sound Type: 00:02:07.683 ==> default: -- Keymap: en-us 00:02:07.683 ==> default: -- TPM Path: 00:02:07.683 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:07.683 ==> default: -- Command line args: 00:02:07.683 ==> default: -> value=-device, 00:02:07.683 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:07.683 ==> default: -> value=-drive, 00:02:07.683 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:02:07.683 ==> default: -> value=-device, 00:02:07.683 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:07.683 ==> default: -> value=-device, 00:02:07.683 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:07.683 ==> default: -> value=-drive, 00:02:07.683 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:07.683 ==> default: -> value=-device, 00:02:07.683 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:07.683 ==> default: -> value=-drive, 00:02:07.683 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:07.683 ==> default: -> value=-device, 00:02:07.683 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:07.683 ==> default: -> value=-drive, 00:02:07.683 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:07.683 ==> default: -> value=-device, 00:02:07.683 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:07.942 ==> default: Creating shared folders metadata... 00:02:07.942 ==> default: Starting domain. 00:02:09.854 ==> default: Waiting for domain to get an IP address... 00:02:27.956 ==> default: Waiting for SSH to become available... 00:02:27.956 ==> default: Configuring and enabling network interfaces... 00:02:32.155 default: SSH address: 192.168.121.141:22 00:02:32.155 default: SSH username: vagrant 00:02:32.155 default: SSH auth method: private key 00:02:34.058 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:42.176 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:47.444 ==> default: Mounting SSHFS shared folder... 00:02:49.346 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:49.346 ==> default: Checking Mount.. 00:02:50.721 ==> default: Folder Successfully Mounted! 00:02:50.721 ==> default: Running provisioner: file... 00:02:51.293 default: ~/.gitconfig => .gitconfig 00:02:51.865 00:02:51.865 SUCCESS! 00:02:51.865 00:02:51.865 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:51.865 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:51.865 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:51.865 00:02:51.874 [Pipeline] } 00:02:51.888 [Pipeline] // stage 00:02:51.897 [Pipeline] dir 00:02:51.897 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:02:51.899 [Pipeline] { 00:02:51.911 [Pipeline] catchError 00:02:51.913 [Pipeline] { 00:02:51.925 [Pipeline] sh 00:02:52.205 + vagrant ssh-config --host vagrant 00:02:52.205 + sed -ne /^Host/,$p 00:02:52.205 + tee ssh_conf 00:02:55.494 Host vagrant 00:02:55.494 HostName 192.168.121.141 00:02:55.494 User vagrant 00:02:55.494 Port 22 00:02:55.494 UserKnownHostsFile /dev/null 00:02:55.494 StrictHostKeyChecking no 00:02:55.494 PasswordAuthentication no 00:02:55.494 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:55.494 IdentitiesOnly yes 00:02:55.494 LogLevel FATAL 00:02:55.494 ForwardAgent yes 00:02:55.494 ForwardX11 yes 00:02:55.494 00:02:55.507 [Pipeline] withEnv 00:02:55.510 [Pipeline] { 00:02:55.523 [Pipeline] sh 00:02:55.803 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:55.803 source /etc/os-release 00:02:55.803 [[ -e /image.version ]] && img=$(< /image.version) 00:02:55.803 # Minimal, systemd-like check. 
00:02:55.803 if [[ -e /.dockerenv ]]; then 00:02:55.803 # Clear garbage from the node's name: 00:02:55.803 # agt-er_autotest_547-896 -> autotest_547-896 00:02:55.803 # $HOSTNAME is the actual container id 00:02:55.803 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:55.803 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:55.803 # We can assume this is a mount from a host where container is running, 00:02:55.803 # so fetch its hostname to easily identify the target swarm worker. 00:02:55.803 container="$(< /etc/hostname) ($agent)" 00:02:55.803 else 00:02:55.803 # Fallback 00:02:55.803 container=$agent 00:02:55.803 fi 00:02:55.803 fi 00:02:55.803 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:55.803 00:02:56.074 [Pipeline] } 00:02:56.090 [Pipeline] // withEnv 00:02:56.099 [Pipeline] setCustomBuildProperty 00:02:56.114 [Pipeline] stage 00:02:56.116 [Pipeline] { (Tests) 00:02:56.133 [Pipeline] sh 00:02:56.414 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:56.687 [Pipeline] sh 00:02:56.966 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:57.239 [Pipeline] timeout 00:02:57.240 Timeout set to expire in 1 hr 0 min 00:02:57.242 [Pipeline] { 00:02:57.256 [Pipeline] sh 00:02:57.536 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:58.103 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:58.115 [Pipeline] sh 00:02:58.395 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:58.668 [Pipeline] sh 00:02:58.963 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:59.021 [Pipeline] sh 00:02:59.301 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:59.561 ++ readlink -f spdk_repo 00:02:59.561 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:59.561 + [[ -n /home/vagrant/spdk_repo ]] 00:02:59.561 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:59.561 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:59.561 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:59.561 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:59.561 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:59.561 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:59.561 + cd /home/vagrant/spdk_repo 00:02:59.561 + source /etc/os-release 00:02:59.561 ++ NAME='Fedora Linux' 00:02:59.561 ++ VERSION='39 (Cloud Edition)' 00:02:59.561 ++ ID=fedora 00:02:59.561 ++ VERSION_ID=39 00:02:59.561 ++ VERSION_CODENAME= 00:02:59.561 ++ PLATFORM_ID=platform:f39 00:02:59.561 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:59.561 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:59.561 ++ LOGO=fedora-logo-icon 00:02:59.561 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:59.561 ++ HOME_URL=https://fedoraproject.org/ 00:02:59.561 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:59.561 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:59.561 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:59.561 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:59.561 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:59.561 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:59.561 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:59.561 ++ SUPPORT_END=2024-11-12 00:02:59.561 ++ VARIANT='Cloud Edition' 00:02:59.561 ++ VARIANT_ID=cloud 00:02:59.561 + uname -a 00:02:59.561 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:59.561 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:00.129 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:00.129 Hugepages 00:03:00.129 node hugesize free / total 00:03:00.129 node0 1048576kB 0 / 0 00:03:00.129 node0 2048kB 0 / 0 00:03:00.129 00:03:00.129 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:00.129 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:00.129 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:00.129 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:00.129 + rm -f /tmp/spdk-ld-path 00:03:00.129 + source autorun-spdk.conf 00:03:00.129 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:00.129 ++ SPDK_TEST_NVMF=1 00:03:00.129 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:00.129 ++ SPDK_TEST_VFIOUSER=1 00:03:00.129 ++ SPDK_TEST_USDT=1 00:03:00.129 ++ SPDK_RUN_UBSAN=1 00:03:00.129 ++ SPDK_TEST_NVMF_MDNS=1 00:03:00.129 ++ NET_TYPE=virt 00:03:00.129 ++ SPDK_JSONRPC_GO_CLIENT=1 00:03:00.129 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:03:00.129 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:00.129 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:00.129 ++ RUN_NIGHTLY=1 00:03:00.129 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:00.129 + [[ -n '' ]] 00:03:00.129 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:00.129 + for M in /var/spdk/build-*-manifest.txt 00:03:00.129 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:00.129 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:00.129 + for M in /var/spdk/build-*-manifest.txt 00:03:00.129 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:00.129 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:00.129 + for M in /var/spdk/build-*-manifest.txt 00:03:00.129 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:00.129 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:00.129 ++ uname 00:03:00.129 + [[ Linux == \L\i\n\u\x ]] 00:03:00.129 + sudo dmesg -T 00:03:00.129 + sudo dmesg --clear 
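Note on the manifest loop traced above: it copies whichever build manifests exist into the job output directory. A condensed sketch of that pattern (paths as shown in the trace; the -f test is what makes an unexpanded glob harmless):

    # Copy any build manifests produced by earlier stages into the job output.
    out=/home/vagrant/spdk_repo/output
    for M in /var/spdk/build-*-manifest.txt; do
        # If the glob matched nothing, $M is the literal pattern and -f fails.
        [[ -f $M ]] && cp "$M" "$out/"
    done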
00:03:00.129 + dmesg_pid=6000 00:03:00.129 + [[ Fedora Linux == FreeBSD ]] 00:03:00.129 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:00.129 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:00.129 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:00.129 + sudo dmesg -Tw 00:03:00.129 + [[ -x /usr/src/fio-static/fio ]] 00:03:00.129 + export FIO_BIN=/usr/src/fio-static/fio 00:03:00.129 + FIO_BIN=/usr/src/fio-static/fio 00:03:00.129 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:00.129 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:00.129 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:00.129 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:00.129 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:00.129 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:00.129 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:00.129 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:00.129 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:00.129 Test configuration: 00:03:00.129 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:00.129 SPDK_TEST_NVMF=1 00:03:00.129 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:00.129 SPDK_TEST_VFIOUSER=1 00:03:00.129 SPDK_TEST_USDT=1 00:03:00.129 SPDK_RUN_UBSAN=1 00:03:00.129 SPDK_TEST_NVMF_MDNS=1 00:03:00.129 NET_TYPE=virt 00:03:00.129 SPDK_JSONRPC_GO_CLIENT=1 00:03:00.129 SPDK_TEST_NATIVE_DPDK=v23.11 00:03:00.129 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:00.129 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:00.389 RUN_NIGHTLY=1 18:23:15 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:03:00.389 18:23:15 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:00.389 18:23:15 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:00.389 18:23:15 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:00.389 18:23:15 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:00.389 18:23:15 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:00.389 18:23:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.389 18:23:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.389 18:23:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.389 18:23:15 -- paths/export.sh@5 -- $ export PATH 00:03:00.389 18:23:15 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.389 18:23:15 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:00.389 18:23:15 -- common/autobuild_common.sh@479 -- $ date +%s 00:03:00.389 18:23:15 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1734200595.XXXXXX 00:03:00.389 18:23:15 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1734200595.oNnxaq 00:03:00.389 18:23:15 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:03:00.389 18:23:15 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:03:00.389 18:23:15 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:00.389 18:23:15 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:03:00.389 18:23:15 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:00.389 18:23:15 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:00.389 18:23:15 -- common/autobuild_common.sh@495 -- $ get_config_params 00:03:00.389 18:23:15 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:03:00.389 18:23:15 -- common/autotest_common.sh@10 -- $ set +x 00:03:00.389 18:23:15 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:03:00.389 18:23:15 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:03:00.389 18:23:15 -- pm/common@17 -- $ local monitor 00:03:00.389 18:23:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.389 18:23:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.389 18:23:15 -- pm/common@25 -- $ sleep 1 00:03:00.389 18:23:15 -- pm/common@21 -- $ date +%s 00:03:00.389 18:23:15 -- pm/common@21 -- $ date +%s 00:03:00.389 18:23:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734200595 00:03:00.389 18:23:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734200595 00:03:00.389 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734200595_collect-cpu-load.pm.log 00:03:00.389 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734200595_collect-vmstat.pm.log 00:03:01.326 18:23:16 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:03:01.326 18:23:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:01.326 18:23:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:01.326 18:23:16 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:01.326 18:23:16 -- spdk/autobuild.sh@16 
-- $ date -u 00:03:01.326 Sat Dec 14 06:23:16 PM UTC 2024 00:03:01.326 18:23:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:01.326 v24.09-1-gb18e1bd62 00:03:01.326 18:23:16 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:01.326 18:23:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:01.326 18:23:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:01.326 18:23:16 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:01.326 18:23:16 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:01.326 18:23:16 -- common/autotest_common.sh@10 -- $ set +x 00:03:01.326 ************************************ 00:03:01.326 START TEST ubsan 00:03:01.326 ************************************ 00:03:01.326 using ubsan 00:03:01.326 18:23:16 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:01.326 00:03:01.326 real 0m0.001s 00:03:01.326 user 0m0.000s 00:03:01.326 sys 0m0.000s 00:03:01.326 18:23:16 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:01.326 18:23:16 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:01.326 ************************************ 00:03:01.326 END TEST ubsan 00:03:01.326 ************************************ 00:03:01.326 18:23:17 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:03:01.326 18:23:17 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:03:01.326 18:23:17 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:03:01.326 18:23:17 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:03:01.326 18:23:17 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:01.326 18:23:17 -- common/autotest_common.sh@10 -- $ set +x 00:03:01.326 ************************************ 00:03:01.326 START TEST build_native_dpdk 00:03:01.326 ************************************ 00:03:01.326 18:23:17 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:03:01.326 18:23:17 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:03:01.326 18:23:17 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:03:01.326 18:23:17 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:03:01.326 18:23:17 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:03:01.326 18:23:17 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:03:01.326 18:23:17 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:03:01.326 18:23:17 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:03:01.326 18:23:17 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:03:01.326 18:23:17 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:03:01.326 18:23:17 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:03:01.326 18:23:17 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:03:01.326 18:23:17 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:03:01.326 18:23:17 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:03:01.326 18:23:17 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:03:01.326 18:23:17 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:03:01.326 18:23:17 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:01.326 18:23:17 
build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:03:01.326 18:23:17 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:03:01.326 18:23:17 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:03:01.326 18:23:17 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:03:01.585 eeb0605f11 version: 23.11.0 00:03:01.585 238778122a doc: update release notes for 23.11 00:03:01.586 46aa6b3cfc doc: fix description of RSS features 00:03:01.586 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:03:01.586 7e421ae345 devtools: support skipping forbid rule check 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 
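Note on the xtrace entering scripts/common.sh here (and continuing below): this is the component-wise version comparison used to decide whether the DPDK tree needs patching. A standalone sketch of the same algorithm, simplified from the trace (version_lt is a stand-in name for the cmp_versions '<' path; the real helper also normalizes fields through a decimal helper):

    # Return 0 if $1 < $2, comparing .-:-separated numeric fields left to right.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 23.11.0 21.11.0   # returns 1, matching the trace above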
00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:03:01.586 patching file config/rte_config.h 00:03:01.586 Hunk #1 succeeded at 60 (offset 1 line). 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:03:01.586 patching file lib/pcapng/rte_pcapng.c 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:01.586 18:23:17 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:03:01.586 18:23:17 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:06.857 The Meson build system 00:03:06.857 Version: 1.5.0 00:03:06.857 Source dir: /home/vagrant/spdk_repo/dpdk 00:03:06.857 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:03:06.857 Build type: native build 00:03:06.857 Program cat found: YES (/usr/bin/cat) 00:03:06.857 Project name: DPDK 00:03:06.857 Project version: 23.11.0 00:03:06.857 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:06.857 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:06.857 Host machine cpu family: x86_64 00:03:06.857 Host machine cpu: x86_64 00:03:06.857 Message: ## Building in Developer Mode ## 00:03:06.857 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:06.857 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:03:06.857 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:03:06.857 Program python3 found: YES (/usr/bin/python3) 00:03:06.857 Program cat found: YES (/usr/bin/cat) 00:03:06.857 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
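Note on the meson invocation above: it configures DPDK with only the drivers SPDK needs and stages an install under dpdk/build, which autorun-spdk.conf then points at via SPDK_RUN_EXTERNAL_DPDK. An abridged sketch of the overall sequence (the configure options are from the log, shortened; the ninja step is the standard meson follow-up and is assumed here, not shown in this excerpt):

    cd /home/vagrant/spdk_repo/dpdk
    meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
    ninja -C build-tmp install   # assumed follow-up: build and install into --prefix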
00:03:06.857 Compiler for C supports arguments -march=native: YES 00:03:06.857 Checking for size of "void *" : 8 00:03:06.857 Checking for size of "void *" : 8 (cached) 00:03:06.857 Library m found: YES 00:03:06.857 Library numa found: YES 00:03:06.857 Has header "numaif.h" : YES 00:03:06.857 Library fdt found: NO 00:03:06.857 Library execinfo found: NO 00:03:06.857 Has header "execinfo.h" : YES 00:03:06.857 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:06.857 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:06.857 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:06.857 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:06.857 Run-time dependency openssl found: YES 3.1.1 00:03:06.857 Run-time dependency libpcap found: YES 1.10.4 00:03:06.857 Has header "pcap.h" with dependency libpcap: YES 00:03:06.857 Compiler for C supports arguments -Wcast-qual: YES 00:03:06.857 Compiler for C supports arguments -Wdeprecated: YES 00:03:06.857 Compiler for C supports arguments -Wformat: YES 00:03:06.857 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:06.857 Compiler for C supports arguments -Wformat-security: NO 00:03:06.857 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:06.857 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:06.857 Compiler for C supports arguments -Wnested-externs: YES 00:03:06.857 Compiler for C supports arguments -Wold-style-definition: YES 00:03:06.857 Compiler for C supports arguments -Wpointer-arith: YES 00:03:06.857 Compiler for C supports arguments -Wsign-compare: YES 00:03:06.857 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:06.857 Compiler for C supports arguments -Wundef: YES 00:03:06.857 Compiler for C supports arguments -Wwrite-strings: YES 00:03:06.857 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:06.857 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:06.857 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:06.857 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:06.857 Program objdump found: YES (/usr/bin/objdump) 00:03:06.857 Compiler for C supports arguments -mavx512f: YES 00:03:06.857 Checking if "AVX512 checking" compiles: YES 00:03:06.857 Fetching value of define "__SSE4_2__" : 1 00:03:06.857 Fetching value of define "__AES__" : 1 00:03:06.857 Fetching value of define "__AVX__" : 1 00:03:06.857 Fetching value of define "__AVX2__" : 1 00:03:06.857 Fetching value of define "__AVX512BW__" : (undefined) 00:03:06.857 Fetching value of define "__AVX512CD__" : (undefined) 00:03:06.857 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:06.857 Fetching value of define "__AVX512F__" : (undefined) 00:03:06.857 Fetching value of define "__AVX512VL__" : (undefined) 00:03:06.857 Fetching value of define "__PCLMUL__" : 1 00:03:06.857 Fetching value of define "__RDRND__" : 1 00:03:06.857 Fetching value of define "__RDSEED__" : 1 00:03:06.857 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:06.857 Fetching value of define "__znver1__" : (undefined) 00:03:06.857 Fetching value of define "__znver2__" : (undefined) 00:03:06.857 Fetching value of define "__znver3__" : (undefined) 00:03:06.857 Fetching value of define "__znver4__" : (undefined) 00:03:06.857 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:06.857 Message: lib/log: Defining dependency "log" 00:03:06.857 Message: lib/kvargs: Defining dependency "kvargs" 00:03:06.857 
Message: lib/telemetry: Defining dependency "telemetry" 00:03:06.857 Checking for function "getentropy" : NO 00:03:06.857 Message: lib/eal: Defining dependency "eal" 00:03:06.857 Message: lib/ring: Defining dependency "ring" 00:03:06.857 Message: lib/rcu: Defining dependency "rcu" 00:03:06.857 Message: lib/mempool: Defining dependency "mempool" 00:03:06.857 Message: lib/mbuf: Defining dependency "mbuf" 00:03:06.857 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:06.857 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:06.857 Compiler for C supports arguments -mpclmul: YES 00:03:06.857 Compiler for C supports arguments -maes: YES 00:03:06.857 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:06.857 Compiler for C supports arguments -mavx512bw: YES 00:03:06.857 Compiler for C supports arguments -mavx512dq: YES 00:03:06.857 Compiler for C supports arguments -mavx512vl: YES 00:03:06.857 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:06.857 Compiler for C supports arguments -mavx2: YES 00:03:06.857 Compiler for C supports arguments -mavx: YES 00:03:06.857 Message: lib/net: Defining dependency "net" 00:03:06.857 Message: lib/meter: Defining dependency "meter" 00:03:06.857 Message: lib/ethdev: Defining dependency "ethdev" 00:03:06.857 Message: lib/pci: Defining dependency "pci" 00:03:06.857 Message: lib/cmdline: Defining dependency "cmdline" 00:03:06.857 Message: lib/metrics: Defining dependency "metrics" 00:03:06.857 Message: lib/hash: Defining dependency "hash" 00:03:06.857 Message: lib/timer: Defining dependency "timer" 00:03:06.857 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:06.857 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:03:06.857 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:03:06.857 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:03:06.857 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:03:06.857 Message: lib/acl: Defining dependency "acl" 00:03:06.857 Message: lib/bbdev: Defining dependency "bbdev" 00:03:06.857 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:06.857 Run-time dependency libelf found: YES 0.191 00:03:06.857 Message: lib/bpf: Defining dependency "bpf" 00:03:06.857 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:06.857 Message: lib/compressdev: Defining dependency "compressdev" 00:03:06.857 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:06.857 Message: lib/distributor: Defining dependency "distributor" 00:03:06.857 Message: lib/dmadev: Defining dependency "dmadev" 00:03:06.857 Message: lib/efd: Defining dependency "efd" 00:03:06.857 Message: lib/eventdev: Defining dependency "eventdev" 00:03:06.857 Message: lib/dispatcher: Defining dependency "dispatcher" 00:03:06.857 Message: lib/gpudev: Defining dependency "gpudev" 00:03:06.857 Message: lib/gro: Defining dependency "gro" 00:03:06.857 Message: lib/gso: Defining dependency "gso" 00:03:06.857 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:06.857 Message: lib/jobstats: Defining dependency "jobstats" 00:03:06.857 Message: lib/latencystats: Defining dependency "latencystats" 00:03:06.857 Message: lib/lpm: Defining dependency "lpm" 00:03:06.857 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:06.857 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:03:06.857 Fetching value of define "__AVX512IFMA__" : (undefined) 00:03:06.857 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:03:06.857 Message: lib/member: Defining dependency "member" 00:03:06.857 Message: lib/pcapng: Defining dependency "pcapng" 00:03:06.857 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:06.858 Message: lib/power: Defining dependency "power" 00:03:06.858 Message: lib/rawdev: Defining dependency "rawdev" 00:03:06.858 Message: lib/regexdev: Defining dependency "regexdev" 00:03:06.858 Message: lib/mldev: Defining dependency "mldev" 00:03:06.858 Message: lib/rib: Defining dependency "rib" 00:03:06.858 Message: lib/reorder: Defining dependency "reorder" 00:03:06.858 Message: lib/sched: Defining dependency "sched" 00:03:06.858 Message: lib/security: Defining dependency "security" 00:03:06.858 Message: lib/stack: Defining dependency "stack" 00:03:06.858 Has header "linux/userfaultfd.h" : YES 00:03:06.858 Has header "linux/vduse.h" : YES 00:03:06.858 Message: lib/vhost: Defining dependency "vhost" 00:03:06.858 Message: lib/ipsec: Defining dependency "ipsec" 00:03:06.858 Message: lib/pdcp: Defining dependency "pdcp" 00:03:06.858 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:06.858 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:03:06.858 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:03:06.858 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:06.858 Message: lib/fib: Defining dependency "fib" 00:03:06.858 Message: lib/port: Defining dependency "port" 00:03:06.858 Message: lib/pdump: Defining dependency "pdump" 00:03:06.858 Message: lib/table: Defining dependency "table" 00:03:06.858 Message: lib/pipeline: Defining dependency "pipeline" 00:03:06.858 Message: lib/graph: Defining dependency "graph" 00:03:06.858 Message: lib/node: Defining dependency "node" 00:03:06.858 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:08.848 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:08.848 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:08.848 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:08.848 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:08.848 Compiler for C supports arguments -Wno-unused-value: YES 00:03:08.848 Compiler for C supports arguments -Wno-format: YES 00:03:08.848 Compiler for C supports arguments -Wno-format-security: YES 00:03:08.848 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:08.848 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:08.848 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:08.848 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:08.848 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:08.848 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:08.848 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:08.848 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:08.849 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:08.849 Has header "sys/epoll.h" : YES 00:03:08.849 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:08.849 Configuring doxy-api-html.conf using configuration 00:03:08.849 Configuring doxy-api-man.conf using configuration 00:03:08.849 Program mandb found: YES (/usr/bin/mandb) 00:03:08.849 Program sphinx-build found: NO 00:03:08.849 Configuring rte_build_config.h using configuration 00:03:08.849 Message: 00:03:08.849 ================= 00:03:08.849 Applications Enabled 00:03:08.849 ================= 
00:03:08.849 00:03:08.849 apps: 00:03:08.849 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:03:08.849 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:03:08.849 test-pmd, test-regex, test-sad, test-security-perf, 00:03:08.849 00:03:08.849 Message: 00:03:08.849 ================= 00:03:08.849 Libraries Enabled 00:03:08.849 ================= 00:03:08.849 00:03:08.849 libs: 00:03:08.849 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:08.849 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:03:08.849 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:03:08.849 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:03:08.849 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:03:08.849 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:03:08.849 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:03:08.849 00:03:08.849 00:03:08.849 Message: 00:03:08.849 =============== 00:03:08.849 Drivers Enabled 00:03:08.849 =============== 00:03:08.849 00:03:08.849 common: 00:03:08.849 00:03:08.849 bus: 00:03:08.849 pci, vdev, 00:03:08.849 mempool: 00:03:08.849 ring, 00:03:08.849 dma: 00:03:08.849 00:03:08.849 net: 00:03:08.849 i40e, 00:03:08.849 raw: 00:03:08.849 00:03:08.849 crypto: 00:03:08.849 00:03:08.849 compress: 00:03:08.849 00:03:08.849 regex: 00:03:08.849 00:03:08.849 ml: 00:03:08.849 00:03:08.849 vdpa: 00:03:08.849 00:03:08.849 event: 00:03:08.849 00:03:08.849 baseband: 00:03:08.849 00:03:08.849 gpu: 00:03:08.849 00:03:08.849 00:03:08.849 Message: 00:03:08.849 ================= 00:03:08.849 Content Skipped 00:03:08.849 ================= 00:03:08.849 00:03:08.849 apps: 00:03:08.849 00:03:08.849 libs: 00:03:08.849 00:03:08.849 drivers: 00:03:08.849 common/cpt: not in enabled drivers build config 00:03:08.849 common/dpaax: not in enabled drivers build config 00:03:08.849 common/iavf: not in enabled drivers build config 00:03:08.849 common/idpf: not in enabled drivers build config 00:03:08.849 common/mvep: not in enabled drivers build config 00:03:08.849 common/octeontx: not in enabled drivers build config 00:03:08.849 bus/auxiliary: not in enabled drivers build config 00:03:08.849 bus/cdx: not in enabled drivers build config 00:03:08.849 bus/dpaa: not in enabled drivers build config 00:03:08.849 bus/fslmc: not in enabled drivers build config 00:03:08.849 bus/ifpga: not in enabled drivers build config 00:03:08.849 bus/platform: not in enabled drivers build config 00:03:08.849 bus/vmbus: not in enabled drivers build config 00:03:08.849 common/cnxk: not in enabled drivers build config 00:03:08.849 common/mlx5: not in enabled drivers build config 00:03:08.849 common/nfp: not in enabled drivers build config 00:03:08.849 common/qat: not in enabled drivers build config 00:03:08.849 common/sfc_efx: not in enabled drivers build config 00:03:08.849 mempool/bucket: not in enabled drivers build config 00:03:08.849 mempool/cnxk: not in enabled drivers build config 00:03:08.849 mempool/dpaa: not in enabled drivers build config 00:03:08.849 mempool/dpaa2: not in enabled drivers build config 00:03:08.849 mempool/octeontx: not in enabled drivers build config 00:03:08.849 mempool/stack: not in enabled drivers build config 00:03:08.849 dma/cnxk: not in enabled drivers build config 00:03:08.849 dma/dpaa: not in enabled drivers build config 00:03:08.849 dma/dpaa2: not in enabled drivers build config 00:03:08.849 
dma/hisilicon: not in enabled drivers build config 00:03:08.849 dma/idxd: not in enabled drivers build config 00:03:08.849 dma/ioat: not in enabled drivers build config 00:03:08.849 dma/skeleton: not in enabled drivers build config 00:03:08.849 net/af_packet: not in enabled drivers build config 00:03:08.849 net/af_xdp: not in enabled drivers build config 00:03:08.849 net/ark: not in enabled drivers build config 00:03:08.849 net/atlantic: not in enabled drivers build config 00:03:08.849 net/avp: not in enabled drivers build config 00:03:08.849 net/axgbe: not in enabled drivers build config 00:03:08.849 net/bnx2x: not in enabled drivers build config 00:03:08.849 net/bnxt: not in enabled drivers build config 00:03:08.849 net/bonding: not in enabled drivers build config 00:03:08.849 net/cnxk: not in enabled drivers build config 00:03:08.849 net/cpfl: not in enabled drivers build config 00:03:08.849 net/cxgbe: not in enabled drivers build config 00:03:08.849 net/dpaa: not in enabled drivers build config 00:03:08.849 net/dpaa2: not in enabled drivers build config 00:03:08.849 net/e1000: not in enabled drivers build config 00:03:08.849 net/ena: not in enabled drivers build config 00:03:08.849 net/enetc: not in enabled drivers build config 00:03:08.849 net/enetfec: not in enabled drivers build config 00:03:08.849 net/enic: not in enabled drivers build config 00:03:08.849 net/failsafe: not in enabled drivers build config 00:03:08.849 net/fm10k: not in enabled drivers build config 00:03:08.849 net/gve: not in enabled drivers build config 00:03:08.849 net/hinic: not in enabled drivers build config 00:03:08.849 net/hns3: not in enabled drivers build config 00:03:08.849 net/iavf: not in enabled drivers build config 00:03:08.849 net/ice: not in enabled drivers build config 00:03:08.849 net/idpf: not in enabled drivers build config 00:03:08.849 net/igc: not in enabled drivers build config 00:03:08.849 net/ionic: not in enabled drivers build config 00:03:08.849 net/ipn3ke: not in enabled drivers build config 00:03:08.849 net/ixgbe: not in enabled drivers build config 00:03:08.849 net/mana: not in enabled drivers build config 00:03:08.849 net/memif: not in enabled drivers build config 00:03:08.849 net/mlx4: not in enabled drivers build config 00:03:08.849 net/mlx5: not in enabled drivers build config 00:03:08.849 net/mvneta: not in enabled drivers build config 00:03:08.849 net/mvpp2: not in enabled drivers build config 00:03:08.849 net/netvsc: not in enabled drivers build config 00:03:08.849 net/nfb: not in enabled drivers build config 00:03:08.849 net/nfp: not in enabled drivers build config 00:03:08.849 net/ngbe: not in enabled drivers build config 00:03:08.849 net/null: not in enabled drivers build config 00:03:08.849 net/octeontx: not in enabled drivers build config 00:03:08.849 net/octeon_ep: not in enabled drivers build config 00:03:08.849 net/pcap: not in enabled drivers build config 00:03:08.849 net/pfe: not in enabled drivers build config 00:03:08.849 net/qede: not in enabled drivers build config 00:03:08.849 net/ring: not in enabled drivers build config 00:03:08.849 net/sfc: not in enabled drivers build config 00:03:08.849 net/softnic: not in enabled drivers build config 00:03:08.849 net/tap: not in enabled drivers build config 00:03:08.849 net/thunderx: not in enabled drivers build config 00:03:08.849 net/txgbe: not in enabled drivers build config 00:03:08.849 net/vdev_netvsc: not in enabled drivers build config 00:03:08.849 net/vhost: not in enabled drivers build config 00:03:08.849 net/virtio: 
not in enabled drivers build config 00:03:08.849 net/vmxnet3: not in enabled drivers build config 00:03:08.849 raw/cnxk_bphy: not in enabled drivers build config 00:03:08.849 raw/cnxk_gpio: not in enabled drivers build config 00:03:08.849 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:08.849 raw/ifpga: not in enabled drivers build config 00:03:08.849 raw/ntb: not in enabled drivers build config 00:03:08.849 raw/skeleton: not in enabled drivers build config 00:03:08.849 crypto/armv8: not in enabled drivers build config 00:03:08.849 crypto/bcmfs: not in enabled drivers build config 00:03:08.849 crypto/caam_jr: not in enabled drivers build config 00:03:08.849 crypto/ccp: not in enabled drivers build config 00:03:08.849 crypto/cnxk: not in enabled drivers build config 00:03:08.849 crypto/dpaa_sec: not in enabled drivers build config 00:03:08.849 crypto/dpaa2_sec: not in enabled drivers build config 00:03:08.849 crypto/ipsec_mb: not in enabled drivers build config 00:03:08.849 crypto/mlx5: not in enabled drivers build config 00:03:08.849 crypto/mvsam: not in enabled drivers build config 00:03:08.849 crypto/nitrox: not in enabled drivers build config 00:03:08.849 crypto/null: not in enabled drivers build config 00:03:08.849 crypto/octeontx: not in enabled drivers build config 00:03:08.849 crypto/openssl: not in enabled drivers build config 00:03:08.849 crypto/scheduler: not in enabled drivers build config 00:03:08.849 crypto/uadk: not in enabled drivers build config 00:03:08.849 crypto/virtio: not in enabled drivers build config 00:03:08.849 compress/isal: not in enabled drivers build config 00:03:08.849 compress/mlx5: not in enabled drivers build config 00:03:08.849 compress/octeontx: not in enabled drivers build config 00:03:08.849 compress/zlib: not in enabled drivers build config 00:03:08.849 regex/mlx5: not in enabled drivers build config 00:03:08.849 regex/cn9k: not in enabled drivers build config 00:03:08.849 ml/cnxk: not in enabled drivers build config 00:03:08.849 vdpa/ifc: not in enabled drivers build config 00:03:08.849 vdpa/mlx5: not in enabled drivers build config 00:03:08.849 vdpa/nfp: not in enabled drivers build config 00:03:08.849 vdpa/sfc: not in enabled drivers build config 00:03:08.849 event/cnxk: not in enabled drivers build config 00:03:08.849 event/dlb2: not in enabled drivers build config 00:03:08.849 event/dpaa: not in enabled drivers build config 00:03:08.849 event/dpaa2: not in enabled drivers build config 00:03:08.849 event/dsw: not in enabled drivers build config 00:03:08.849 event/opdl: not in enabled drivers build config 00:03:08.849 event/skeleton: not in enabled drivers build config 00:03:08.849 event/sw: not in enabled drivers build config 00:03:08.849 event/octeontx: not in enabled drivers build config 00:03:08.849 baseband/acc: not in enabled drivers build config 00:03:08.849 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:08.849 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:08.849 baseband/la12xx: not in enabled drivers build config 00:03:08.849 baseband/null: not in enabled drivers build config 00:03:08.850 baseband/turbo_sw: not in enabled drivers build config 00:03:08.850 gpu/cuda: not in enabled drivers build config 00:03:08.850 00:03:08.850 00:03:08.850 Build targets in project: 220 00:03:08.850 00:03:08.850 DPDK 23.11.0 00:03:08.850 00:03:08.850 User defined options 00:03:08.850 libdir : lib 00:03:08.850 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:08.850 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:03:08.850 c_link_args : 00:03:08.850 enable_docs : false 00:03:08.850 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:08.850 enable_kmods : false 00:03:08.850 machine : native 00:03:08.850 tests : false 00:03:08.850 00:03:08.850 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:08.850 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:03:08.850 18:23:24 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:09.108 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:09.108 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:09.108 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:09.108 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:09.108 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:09.108 [5/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:09.108 [6/710] Linking static target lib/librte_kvargs.a 00:03:09.108 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:09.366 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:09.366 [9/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:09.366 [10/710] Linking static target lib/librte_log.a 00:03:09.366 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.624 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:09.624 [13/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:09.881 [14/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.881 [15/710] Linking target lib/librte_log.so.24.0 00:03:09.881 [16/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:09.881 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:09.881 [18/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:10.140 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:10.140 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:10.140 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:10.140 [22/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:10.140 [23/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:10.140 [24/710] Linking target lib/librte_kvargs.so.24.0 00:03:10.398 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:10.398 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:10.398 [27/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:10.398 [28/710] Linking static target lib/librte_telemetry.a 00:03:10.398 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:10.398 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:10.398 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:10.657 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:10.657 [33/710] Generating 
lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.657 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:10.657 [35/710] Linking target lib/librte_telemetry.so.24.0 00:03:10.657 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:10.916 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:10.916 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:10.916 [39/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:10.916 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:10.916 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:10.916 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:10.916 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:10.916 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:11.175 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:11.175 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:11.434 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:11.434 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:11.434 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:11.434 [50/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:11.434 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:11.434 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:11.434 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:11.692 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:11.692 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:11.951 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:11.951 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:11.951 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:11.951 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:11.951 [60/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:11.951 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:11.951 [62/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:11.951 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:11.951 [64/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:12.209 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:12.209 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:12.209 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:12.209 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:12.468 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:12.727 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:12.727 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:12.727 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 
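[Annotation, not part of the build output: the librte_log, librte_kvargs, and librte_eal objects compiled above make up DPDK's core runtime. As a rough sketch of what an application does with them once linked, assuming DPDK 23.11 public headers on the include path (the log message is illustrative):]

```c
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_log.h>
#include <rte_lcore.h>
#include <rte_debug.h>

int main(int argc, char **argv)
{
	/* rte_eal_init() parses and consumes EAL arguments (-l, --no-huge, ...). */
	int ret = rte_eal_init(argc, argv);
	if (ret < 0)
		rte_exit(EXIT_FAILURE, "EAL init failed\n");

	RTE_LOG(INFO, EAL, "EAL up, main lcore=%u\n", rte_lcore_id());

	rte_eal_cleanup();
	return 0;
}
```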
00:03:12.727 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:12.727 [74/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:12.727 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:12.727 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:12.727 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:12.727 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:12.986 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:12.986 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:13.245 [81/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:13.245 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:13.245 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:13.245 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:13.245 [85/710] Linking static target lib/librte_ring.a 00:03:13.504 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:13.504 [87/710] Linking static target lib/librte_eal.a 00:03:13.504 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:13.504 [89/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.504 [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:13.763 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:13.763 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:13.763 [93/710] Linking static target lib/librte_mempool.a 00:03:13.763 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:13.763 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:14.022 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:14.022 [97/710] Linking static target lib/librte_rcu.a 00:03:14.022 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:14.022 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:14.281 [100/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:14.281 [101/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.281 [102/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.281 [103/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:14.540 [104/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:14.540 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:14.540 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:14.540 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:14.540 [108/710] Linking static target lib/librte_mbuf.a 00:03:14.799 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:14.799 [110/710] Linking static target lib/librte_net.a 00:03:14.799 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:14.799 [112/710] Linking static target lib/librte_meter.a 00:03:15.058 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.058 [114/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.058 
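[Annotation: the span above links librte_ring.a static ([85/710]). A minimal sketch of the ring API those objects provide, assuming an initialized EAL; the ring name and sizes are illustrative:]

```c
#include <rte_ring.h>
#include <rte_memory.h>
#include <rte_errno.h>

static int ring_demo(void)
{
	/* Single-producer/single-consumer ring of 1024 pointer slots. */
	struct rte_ring *r = rte_ring_create("demo_ring", 1024,
					     SOCKET_ID_ANY,
					     RING_F_SP_ENQ | RING_F_SC_DEQ);
	if (r == NULL)
		return -rte_errno;

	int value = 42;
	void *obj = NULL;

	rte_ring_enqueue(r, &value);	/* returns 0 on success */
	rte_ring_dequeue(r, &obj);	/* pops &value back out */

	rte_ring_free(r);
	return 0;
}
```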
[115/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:15.058 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:15.058 [117/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.058 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:15.058 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:15.625 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:15.884 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:15.884 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:16.143 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:16.143 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:16.143 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:16.143 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:16.143 [127/710] Linking static target lib/librte_pci.a 00:03:16.402 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:16.402 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:16.402 [130/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.402 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:16.402 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:16.661 [133/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:16.661 [134/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:16.661 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:16.661 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:16.661 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:16.661 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:16.661 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:16.661 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:16.921 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:16.921 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:16.921 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:17.179 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:17.179 [145/710] Linking static target lib/librte_cmdline.a 00:03:17.179 [146/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:17.179 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:17.179 [148/710] Linking static target lib/librte_metrics.a 00:03:17.438 [149/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:17.438 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:17.698 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.698 [152/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:17.698 [153/710] Linking static target lib/librte_timer.a 00:03:17.698 [154/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:17.957 
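[Annotation: the hash objects compiled above (hash_rte_cuckoo_hash.c.o, hash_rte_thash.c.o) implement librte_hash. A sketch of its cuckoo-hash API, with illustrative name/sizes and an assumed initialized EAL:]

```c
#include <stdint.h>
#include <rte_hash.h>
#include <rte_jhash.h>

static int hash_demo(void)
{
	struct rte_hash_parameters params = {
		.name = "demo_hash",		/* illustrative name */
		.entries = 1024,
		.key_len = sizeof(uint32_t),
		.hash_func = rte_jhash,
		.socket_id = 0,
	};
	struct rte_hash *h = rte_hash_create(&params);
	if (h == NULL)
		return -1;

	uint32_t key = 7;
	void *data = NULL;

	/* Store 99 against key 7, then read it back. */
	rte_hash_add_key_data(h, &key, (void *)(uintptr_t)99);
	int ret = rte_hash_lookup_data(h, &key, &data);

	rte_hash_free(h);
	return ret < 0 ? ret : (int)(uintptr_t)data;	/* 99 on success */
}
```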
[155/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.216 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.474 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:18.733 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:18.733 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:18.733 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:19.301 [161/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:19.301 [162/710] Linking static target lib/librte_bitratestats.a 00:03:19.301 [163/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:19.301 [164/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:19.301 [165/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:19.301 [166/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.301 [167/710] Linking static target lib/librte_ethdev.a 00:03:19.301 [168/710] Linking target lib/librte_eal.so.24.0 00:03:19.561 [169/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.561 [170/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:19.561 [171/710] Linking target lib/librte_ring.so.24.0 00:03:19.561 [172/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:19.561 [173/710] Linking target lib/librte_meter.so.24.0 00:03:19.561 [174/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:19.561 [175/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:19.822 [176/710] Linking target lib/librte_pci.so.24.0 00:03:19.822 [177/710] Linking target lib/librte_rcu.so.24.0 00:03:19.822 [178/710] Linking target lib/librte_mempool.so.24.0 00:03:19.822 [179/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:19.822 [180/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:19.822 [181/710] Linking static target lib/librte_hash.a 00:03:19.822 [182/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:19.822 [183/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:19.822 [184/710] Linking target lib/librte_mbuf.so.24.0 00:03:19.822 [185/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:19.822 [186/710] Linking static target lib/librte_bbdev.a 00:03:19.822 [187/710] Linking target lib/librte_timer.so.24.0 00:03:20.080 [188/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:20.080 [189/710] Linking static target lib/acl/libavx2_tmp.a 00:03:20.080 [190/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:20.080 [191/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:20.080 [192/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:20.080 [193/710] Linking target lib/librte_net.so.24.0 00:03:20.080 [194/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:20.339 [195/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:20.339 [196/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:03:20.339 [197/710] Linking static target lib/acl/libavx512_tmp.a 00:03:20.339 [198/710] Linking 
target lib/librte_cmdline.so.24.0 00:03:20.339 [199/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:20.339 [200/710] Linking static target lib/librte_acl.a 00:03:20.339 [201/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.598 [202/710] Linking target lib/librte_hash.so.24.0 00:03:20.598 [203/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:20.598 [204/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.598 [205/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:20.598 [206/710] Linking target lib/librte_bbdev.so.24.0 00:03:20.598 [207/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.598 [208/710] Linking target lib/librte_acl.so.24.0 00:03:20.856 [209/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:20.856 [210/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:20.856 [211/710] Linking static target lib/librte_cfgfile.a 00:03:20.856 [212/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:21.115 [213/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:21.115 [214/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:21.115 [215/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.115 [216/710] Linking target lib/librte_cfgfile.so.24.0 00:03:21.115 [217/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:21.374 [218/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:21.374 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:21.633 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:21.633 [221/710] Linking static target lib/librte_bpf.a 00:03:21.633 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:21.633 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:21.633 [224/710] Linking static target lib/librte_compressdev.a 00:03:21.892 [225/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:21.892 [226/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.892 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:21.892 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:22.151 [229/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.151 [230/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:22.151 [231/710] Linking static target lib/librte_distributor.a 00:03:22.151 [232/710] Linking target lib/librte_compressdev.so.24.0 00:03:22.151 [233/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:22.410 [234/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:22.410 [235/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.410 [236/710] Linking static target lib/librte_dmadev.a 00:03:22.410 [237/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:22.410 [238/710] Linking target lib/librte_distributor.so.24.0 
00:03:22.669 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.928 [240/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:22.928 [241/710] Linking target lib/librte_dmadev.so.24.0 00:03:22.928 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:23.188 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:23.188 [244/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:23.188 [245/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:23.188 [246/710] Linking static target lib/librte_efd.a 00:03:23.447 [247/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:23.447 [248/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.447 [249/710] Linking target lib/librte_efd.so.24.0 00:03:23.447 [250/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:23.707 [251/710] Linking static target lib/librte_cryptodev.a 00:03:23.966 [252/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:23.966 [253/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:23.966 [254/710] Linking static target lib/librte_dispatcher.a 00:03:23.966 [255/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.225 [256/710] Linking target lib/librte_ethdev.so.24.0 00:03:24.225 [257/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:24.225 [258/710] Linking static target lib/librte_gpudev.a 00:03:24.225 [259/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:24.225 [260/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:24.225 [261/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:24.225 [262/710] Linking target lib/librte_metrics.so.24.0 00:03:24.225 [263/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:24.485 [264/710] Linking target lib/librte_bpf.so.24.0 00:03:24.485 [265/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.485 [266/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:24.485 [267/710] Linking target lib/librte_bitratestats.so.24.0 00:03:24.485 [268/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:24.485 [269/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:24.742 [270/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:24.742 [271/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.742 [272/710] Linking target lib/librte_cryptodev.so.24.0 00:03:25.001 [273/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.001 [274/710] Linking target lib/librte_gpudev.so.24.0 00:03:25.001 [275/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:25.001 [276/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:25.001 [277/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:25.001 [278/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:25.001 [279/710] 
Linking static target lib/librte_eventdev.a 00:03:25.259 [280/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:25.259 [281/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:25.259 [282/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:25.259 [283/710] Linking static target lib/librte_gro.a 00:03:25.259 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:25.518 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:25.518 [286/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.518 [287/710] Linking target lib/librte_gro.so.24.0 00:03:25.518 [288/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:25.518 [289/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:25.518 [290/710] Linking static target lib/librte_gso.a 00:03:25.777 [291/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.777 [292/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:25.777 [293/710] Linking target lib/librte_gso.so.24.0 00:03:25.777 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:25.777 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:26.037 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:26.037 [297/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:26.037 [298/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:26.037 [299/710] Linking static target lib/librte_jobstats.a 00:03:26.296 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:26.296 [301/710] Linking static target lib/librte_ip_frag.a 00:03:26.296 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:26.296 [303/710] Linking static target lib/librte_latencystats.a 00:03:26.567 [304/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:26.567 [305/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.567 [306/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:26.567 [307/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:26.567 [308/710] Linking target lib/librte_jobstats.so.24.0 00:03:26.567 [309/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.567 [310/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.567 [311/710] Linking target lib/librte_latencystats.so.24.0 00:03:26.567 [312/710] Linking target lib/librte_ip_frag.so.24.0 00:03:26.567 [313/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:26.567 [314/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:26.855 [315/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:26.855 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:26.855 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:27.121 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.121 [319/710] Linking target lib/librte_eventdev.so.24.0 00:03:27.121 [320/710] Compiling C object 
lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:27.121 [321/710] Linking static target lib/librte_lpm.a 00:03:27.121 [322/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:27.121 [323/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:27.379 [324/710] Linking target lib/librte_dispatcher.so.24.0 00:03:27.379 [325/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:27.379 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:27.379 [327/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:27.379 [328/710] Linking static target lib/librte_pcapng.a 00:03:27.379 [329/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:27.379 [330/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.379 [331/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:27.638 [332/710] Linking target lib/librte_lpm.so.24.0 00:03:27.638 [333/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:27.638 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:27.638 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.638 [336/710] Linking target lib/librte_pcapng.so.24.0 00:03:27.897 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:27.897 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:27.897 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:28.156 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:28.156 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:28.156 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:28.156 [343/710] Linking static target lib/librte_power.a 00:03:28.156 [344/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:28.156 [345/710] Linking static target lib/librte_regexdev.a 00:03:28.156 [346/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:28.156 [347/710] Linking static target lib/librte_rawdev.a 00:03:28.415 [348/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:28.415 [349/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:28.415 [350/710] Linking static target lib/librte_member.a 00:03:28.415 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:28.674 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:28.674 [353/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:28.674 [354/710] Linking static target lib/librte_mldev.a 00:03:28.674 [355/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.674 [356/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.674 [357/710] Linking target lib/librte_member.so.24.0 00:03:28.674 [358/710] Linking target lib/librte_rawdev.so.24.0 00:03:28.674 [359/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.933 [360/710] Linking target lib/librte_power.so.24.0 00:03:28.933 [361/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:28.933 
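[Annotation: librte_lpm was linked static above ([321/710]). A sketch of its longest-prefix-match API with illustrative route values, assuming an initialized EAL:]

```c
#include <stdint.h>
#include <rte_ip.h>
#include <rte_lpm.h>

static uint32_t lpm_demo(void)
{
	struct rte_lpm_config cfg = {
		.max_rules = 1024,
		.number_tbl8s = 256,
	};
	struct rte_lpm *lpm = rte_lpm_create("demo_lpm", 0, &cfg);
	uint32_t hop = 0;

	if (lpm == NULL)
		return 0;

	/* Route 10.0.0.0/8 to next hop 1, then look up 10.1.2.3. */
	rte_lpm_add(lpm, RTE_IPV4(10, 0, 0, 0), 8, 1);
	rte_lpm_lookup(lpm, RTE_IPV4(10, 1, 2, 3), &hop);	/* hop == 1 */

	rte_lpm_free(lpm);
	return hop;
}
```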
[362/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.933 [363/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:28.933 [364/710] Linking target lib/librte_regexdev.so.24.0 00:03:29.192 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:29.192 [366/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:29.192 [367/710] Linking static target lib/librte_reorder.a 00:03:29.192 [368/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:29.192 [369/710] Linking static target lib/librte_rib.a 00:03:29.451 [370/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:29.451 [371/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:29.451 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:29.451 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:29.451 [374/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.451 [375/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:29.451 [376/710] Linking static target lib/librte_stack.a 00:03:29.451 [377/710] Linking target lib/librte_reorder.so.24.0 00:03:29.709 [378/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.709 [379/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:29.709 [380/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:29.709 [381/710] Linking static target lib/librte_security.a 00:03:29.709 [382/710] Linking target lib/librte_rib.so.24.0 00:03:29.709 [383/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.709 [384/710] Linking target lib/librte_stack.so.24.0 00:03:29.709 [385/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.709 [386/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:29.968 [387/710] Linking target lib/librte_mldev.so.24.0 00:03:29.968 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.968 [389/710] Linking target lib/librte_security.so.24.0 00:03:30.227 [390/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:30.227 [391/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:30.227 [392/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:30.227 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:30.486 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:30.486 [395/710] Linking static target lib/librte_sched.a 00:03:30.744 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:30.744 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.744 [398/710] Linking target lib/librte_sched.so.24.0 00:03:31.003 [399/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:31.003 [400/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:31.003 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:31.003 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:31.262 [403/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:31.262 [404/710] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:31.520 [405/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:31.520 [406/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:31.779 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:31.779 [408/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:32.037 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:32.037 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:32.296 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:32.296 [412/710] Linking static target lib/librte_ipsec.a 00:03:32.296 [413/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:32.296 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:32.296 [415/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:32.296 [416/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:32.554 [417/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:32.554 [418/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.554 [419/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:32.554 [420/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:32.554 [421/710] Linking target lib/librte_ipsec.so.24.0 00:03:32.813 [422/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:32.813 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:33.381 [424/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:33.381 [425/710] Linking static target lib/librte_pdcp.a 00:03:33.381 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:33.381 [427/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:33.381 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:33.381 [429/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:33.639 [430/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:33.639 [431/710] Linking static target lib/librte_fib.a 00:03:33.639 [432/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.639 [433/710] Linking target lib/librte_pdcp.so.24.0 00:03:33.639 [434/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:33.898 [435/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.898 [436/710] Linking target lib/librte_fib.so.24.0 00:03:34.157 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:34.416 [438/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:34.416 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:34.674 [440/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:34.674 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:34.674 [442/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:34.674 [443/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:34.933 [444/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:35.192 [445/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:35.192 [446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 
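[Annotation: librte_stack was linked static above ([376/710]). A sketch of its push/pop API; the stack name and depth are illustrative, and an initialized EAL is assumed:]

```c
#include <rte_stack.h>

static void stack_demo(void)
{
	/* Standard (lock-based) stack of 64 pointer slots; pass
	 * RTE_STACK_F_LF in the flags argument for the lock-free variant. */
	struct rte_stack *s = rte_stack_create("demo_stack", 64, 0, 0);
	if (s == NULL)
		return;

	int v = 5;
	void *objs[1] = { &v };

	rte_stack_push(s, objs, 1);	/* returns number of objects pushed */
	rte_stack_pop(s, objs, 1);	/* pops &v back into objs[0] */

	rte_stack_free(s);
}
```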
00:03:35.192 [447/710] Linking static target lib/librte_port.a 00:03:35.192 [448/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:35.450 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:35.450 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:35.450 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:35.450 [452/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:35.709 [453/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:35.709 [454/710] Linking static target lib/librte_pdump.a 00:03:35.709 [455/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.709 [456/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:35.709 [457/710] Linking target lib/librte_port.so.24.0 00:03:35.709 [458/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:35.967 [459/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:35.967 [460/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:35.967 [461/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.967 [462/710] Linking target lib/librte_pdump.so.24.0 00:03:36.534 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:36.534 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:36.534 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:36.534 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:36.534 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:36.534 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:37.100 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:37.100 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:37.100 [471/710] Linking static target lib/librte_table.a 00:03:37.100 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:37.100 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:37.667 [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.667 [475/710] Linking target lib/librte_table.so.24.0 00:03:37.926 [476/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:37.926 [477/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:37.926 [478/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:37.926 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:38.184 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:38.443 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:38.443 [482/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:38.701 [483/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:38.701 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:38.701 [485/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:38.701 [486/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:39.272 [487/710] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:39.272 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:39.531 [489/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:39.531 [490/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:39.531 [491/710] Linking static target lib/librte_graph.a 00:03:39.531 [492/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:39.531 [493/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:40.095 [494/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:40.095 [495/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:40.095 [496/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.095 [497/710] Linking target lib/librte_graph.so.24.0 00:03:40.095 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:40.095 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:40.662 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:40.662 [501/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:40.662 [502/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:40.662 [503/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:40.662 [504/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:40.662 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:40.920 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:41.183 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:41.183 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:41.468 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:41.468 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:41.468 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:41.468 [512/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:41.468 [513/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:41.468 [514/710] Linking static target lib/librte_node.a 00:03:41.741 [515/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.741 [516/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:41.999 [517/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:41.999 [518/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:41.999 [519/710] Linking target lib/librte_node.so.24.0 00:03:41.999 [520/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:41.999 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:41.999 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:41.999 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:42.258 [524/710] Linking static target drivers/librte_bus_vdev.a 00:03:42.258 [525/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:42.258 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:42.258 [527/710] Linking static target drivers/librte_bus_pci.a 00:03:42.258 [528/710] Generating drivers/rte_bus_vdev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:03:42.516 [529/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:42.516 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:42.516 [531/710] Linking target drivers/librte_bus_vdev.so.24.0 00:03:42.516 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:42.516 [533/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:42.516 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:42.774 [535/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:42.774 [536/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:42.774 [537/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:42.774 [538/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.774 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:03:43.032 [540/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:43.032 [541/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:43.032 [542/710] Linking static target drivers/librte_mempool_ring.a 00:03:43.032 [543/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:43.032 [544/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:43.032 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:03:43.032 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:43.598 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:43.857 [548/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:43.857 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:43.857 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:43.857 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:44.791 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:44.791 [553/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:44.791 [554/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:45.049 [555/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:45.049 [556/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:45.049 [557/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:45.615 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:45.615 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:45.615 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:45.874 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:45.874 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:46.439 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:46.439 [564/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:46.697 [565/710] Compiling C object 
app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:46.697 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:46.955 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:47.214 [568/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:47.214 [569/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:47.214 [570/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:47.214 [571/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:47.472 [572/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:47.472 [573/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:47.472 [574/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:47.730 [575/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:47.730 [576/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:47.730 [577/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:47.730 [578/710] Linking static target lib/librte_vhost.a 00:03:47.730 [579/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:47.730 [580/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:47.988 [581/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:47.988 [582/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:48.247 [583/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:48.247 [584/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:48.247 [585/710] Linking static target drivers/librte_net_i40e.a 00:03:48.506 [586/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:48.506 [587/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:48.506 [588/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:48.506 [589/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:48.506 [590/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:48.764 [591/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:48.764 [592/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.023 [593/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:49.023 [594/710] Linking target drivers/librte_net_i40e.so.24.0 00:03:49.023 [595/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.023 [596/710] Linking target lib/librte_vhost.so.24.0 00:03:49.281 [597/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:49.281 [598/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:49.544 [599/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:49.807 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:49.807 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:50.065 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:50.065 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:50.065 [604/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 
00:03:50.065 [605/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:50.324 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:50.324 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:50.890 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:50.890 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:50.890 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:50.890 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:50.891 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:51.149 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:51.149 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:51.149 [615/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:51.149 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:51.149 [617/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:51.408 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:51.667 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:51.925 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:51.925 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:51.925 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:52.184 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:52.752 [624/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:53.011 [625/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:53.011 [626/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:53.011 [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:53.269 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:53.269 [629/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:53.269 [630/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:53.270 [631/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:53.528 [632/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:53.528 [633/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:53.528 [634/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:53.787 [635/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:53.787 [636/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:53.787 [637/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:53.787 [638/710] Linking static target lib/librte_pipeline.a 00:03:54.046 [639/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:54.046 [640/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 
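The `[N/710]` prefixes in the entries above and below are ninja's progress counter over the full set of meson targets. A minimal sketch for reproducing the same target inventory outside the CI run, assuming the build tree path that appears later in this log (both commands are standard ninja/meson tooling, not part of the autotest scripts):

  # List every target ninja knows about in this build tree.
  ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -t targets all | head

  # Ask meson directly for its target list, with types and source files.
  meson introspect /home/vagrant/spdk_repo/dpdk/build-tmp --targets | head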
00:03:54.305 [641/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:54.305 [642/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:54.305 [643/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:54.305 [644/710] Linking target app/dpdk-dumpcap 00:03:54.564 [645/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:54.564 [646/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:54.564 [647/710] Linking target app/dpdk-graph 00:03:54.823 [648/710] Linking target app/dpdk-pdump 00:03:54.823 [649/710] Linking target app/dpdk-proc-info 00:03:54.823 [650/710] Linking target app/dpdk-test-cmdline 00:03:54.823 [651/710] Linking target app/dpdk-test-acl 00:03:54.823 [652/710] Linking target app/dpdk-test-compress-perf 00:03:55.082 [653/710] Linking target app/dpdk-test-dma-perf 00:03:55.082 [654/710] Linking target app/dpdk-test-crypto-perf 00:03:55.341 [655/710] Linking target app/dpdk-test-eventdev 00:03:55.341 [656/710] Linking target app/dpdk-test-fib 00:03:55.341 [657/710] Linking target app/dpdk-test-flow-perf 00:03:55.341 [658/710] Linking target app/dpdk-test-gpudev 00:03:55.599 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:55.600 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:55.600 [661/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:55.858 [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:55.858 [663/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:55.858 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:55.858 [665/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:56.117 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:56.117 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:56.117 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:56.376 [669/710] Linking target app/dpdk-test-mldev 00:03:56.376 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:56.376 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:56.635 [672/710] Linking target app/dpdk-test-bbdev 00:03:56.635 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:56.635 [674/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.635 [675/710] Linking target lib/librte_pipeline.so.24.0 00:03:56.893 [676/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:56.893 [677/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:56.893 [678/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:57.152 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:57.411 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:57.411 [681/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:57.411 [682/710] Linking target app/dpdk-test-pipeline 00:03:57.670 [683/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:57.670 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:57.928 
[685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:58.187 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:58.187 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:58.187 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:58.446 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:58.704 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:58.704 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:58.704 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:58.963 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:58.963 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:59.532 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:59.532 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:59.791 [697/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:59.791 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:59.791 [699/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:04:00.050 [700/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:04:00.050 [701/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:04:00.050 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:04:00.050 [703/710] Linking target app/dpdk-test-sad 00:04:00.309 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:04:00.309 [705/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:04:00.309 [706/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:04:00.309 [707/710] Linking target app/dpdk-test-regex 00:04:00.877 [708/710] Linking target app/dpdk-testpmd 00:04:00.877 [709/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:04:01.472 [710/710] Linking target app/dpdk-test-security-perf 00:04:01.472 18:24:16 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:04:01.472 18:24:16 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:04:01.472 18:24:16 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:04:01.472 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:04:01.472 [0/1] Installing files. 
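The `ninja ... install` invocation above is the tail end of the DPDK build driven by autobuild_common.sh. A minimal sketch of the equivalent manual flow, assuming the source and build paths shown in this log; the meson options are illustrative rather than the exact ones the CI used, and the prefix is inferred from the `share/dpdk/examples` destinations in the install listing that follows:

  # Configure DPDK out-of-tree with meson (prefix inferred from this log's install paths).
  meson setup /home/vagrant/spdk_repo/dpdk/build-tmp /home/vagrant/spdk_repo/dpdk \
      --prefix=/home/vagrant/spdk_repo/dpdk/build

  # Compile the 710 targets counted above, then install libraries, headers, and example sources.
  ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
  ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install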
00:04:01.734 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:01.734 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:01.735 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.735 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.735 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:01.736 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:04:01.736 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:01.737 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:01.738 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:01.738 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:01.738 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.739 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:04:01.998 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.998 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.999 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.999 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.999 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.999 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:04:01.999 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.999 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.999 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.999 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.999 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.999 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.999 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.999 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.999 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:01.999 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:02.260 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:02.260 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:02.260 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:02.260 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:02.260 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:02.260 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:02.260 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:02.260 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:02.260 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:02.260 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:02.260 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.260 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.261 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.262 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.522 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:04:02.523 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:04:02.523 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:04:02.523 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:04:02.523 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:04:02.523 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:04:02.523 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:04:02.523 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:04:02.523 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:04:02.523 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:04:02.523 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:04:02.523 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:04:02.523 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:04:02.523 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:04:02.523 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:04:02.523 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:04:02.523 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:04:02.523 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:04:02.523 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:04:02.523 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:04:02.523 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:04:02.523 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:04:02.523 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:04:02.523 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:04:02.523 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:04:02.523 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:04:02.523 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:04:02.523 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:04:02.523 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:04:02.523 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:04:02.523 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:04:02.523 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:04:02.523 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:04:02.523 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:04:02.523 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:04:02.523 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:04:02.523 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:04:02.523 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:04:02.523 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:04:02.523 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:04:02.523 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:04:02.523 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:04:02.523 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:04:02.523 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:04:02.523 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:04:02.523 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:04:02.523 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:04:02.523 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:04:02.523 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:04:02.523 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:04:02.523 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:04:02.523 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:04:02.523 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:04:02.523 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:04:02.523 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:04:02.523 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:04:02.523 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:04:02.523 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:04:02.523 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:04:02.523 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:04:02.523 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:04:02.523 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:04:02.523 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:04:02.523 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:04:02.523 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:04:02.523 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:04:02.523 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:04:02.523 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:04:02.523 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:04:02.523 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:04:02.523 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:04:02.523 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:04:02.523 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:04:02.523 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:04:02.523 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:04:02.523 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:04:02.523 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:04:02.523 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:04:02.523 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:04:02.523 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:04:02.523 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:04:02.523 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:04:02.523 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:04:02.523 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:04:02.523 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:04:02.524 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:04:02.524 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:04:02.524 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:04:02.524 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:04:02.524 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:04:02.524 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:04:02.524 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:04:02.524 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:04:02.524 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:04:02.524 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:04:02.524 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:04:02.524 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:04:02.524 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:04:02.524 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:04:02.524 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:04:02.524 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:04:02.524 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:04:02.524 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:04:02.524 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:04:02.524 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:04:02.524 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:04:02.524 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:04:02.524 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:04:02.524 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:04:02.524 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:04:02.524 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:04:02.524 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:04:02.524 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:04:02.524 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:04:02.524 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:04:02.524 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:04:02.524 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:04:02.524 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:04:02.524 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:04:02.524 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:04:02.524 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:04:02.524 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:04:02.524 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:04:02.524 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:04:02.524 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:04:02.524 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:04:02.524 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:04:02.524 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:02.524 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:04:02.524 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:02.524 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:04:02.524 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:02.524 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
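[Editorial aside: the "Installing symlink" entries above and below follow the standard ELF shared-library versioning chain: the real object carries the full version (librte_X.so.24.0), the soname link (librte_X.so.24) is what the dynamic loader resolves at run time, and the unversioned link (librte_X.so) is what the compile-time linker resolves. A minimal sketch of the equivalent commands by hand, using librte_net_i40e from this log as the example; the actual install is driven by meson and the symlink-drivers-solibs.sh script shown below, not these literal commands:

  cd /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
  # soname link: matches the soname recorded in the library,
  # resolved by the dynamic loader at run time
  ln -s librte_net_i40e.so.24.0 librte_net_i40e.so.24
  # linker name: resolved by -lrte_net_i40e at link time
  ln -s librte_net_i40e.so.24 librte_net_i40e.so
]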
00:04:02.524 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:04:02.524 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:04:02.524 18:24:18 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat
00:04:02.524 ************************************
00:04:02.524 END TEST build_native_dpdk
00:04:02.524 ************************************
00:04:02.524 18:24:18 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk
00:04:02.524 
00:04:02.524 real 1m1.043s
00:04:02.524 user 7m16.078s
00:04:02.524 sys 1m12.788s
00:04:02.524 18:24:18 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:04:02.524 18:24:18 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:04:02.524 18:24:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:02.524 18:24:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:02.524 18:24:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:02.524 18:24:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:02.524 18:24:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:02.524 18:24:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:02.524 18:24:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:02.524 18:24:18 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared
00:04:02.524 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:04:02.783 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:04:02.783 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include
00:04:02.783 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:03.350 Using 'verbs' RDMA provider
00:04:19.168 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:04:31.374 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:04:31.374 go version go1.21.1 linux/amd64
00:04:31.374 Creating mk/config.mk...done.
00:04:31.374 Creating mk/cc.flags.mk...done.
00:04:31.374 Type 'make' to build.
00:04:31.374 18:24:46 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:04:31.374 18:24:46 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:04:31.374 18:24:46 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:04:31.374 18:24:46 -- common/autotest_common.sh@10 -- $ set +x
00:04:31.374 ************************************
00:04:31.374 START TEST make
00:04:31.374 ************************************
00:04:31.374 18:24:46 make -- common/autotest_common.sh@1125 -- $ make -j10
00:04:31.374 make[1]: Nothing to be done for 'all'.
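[Editorial aside: the configure and make invocations captured above can be replayed outside the CI wrapper; a minimal sketch, assuming DPDK has already been built and installed under /home/vagrant/spdk_repo/dpdk/build exactly as the preceding install log shows (the DPDK-side meson options are not visible in this excerpt):

  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user \
    --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
    --with-avahi --with-golang --with-shared
  make -j10    # the run_test wrapper above amounts to this

--with-dpdk points configure at the custom DPDK prefix, which is why the log then picks up that prefix's lib/pkgconfig directory; the libvfio-user output that follows is the first submodule this make builds.]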
00:04:32.751 The Meson build system
00:04:32.751 Version: 1.5.0
00:04:32.751 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user
00:04:32.751 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
00:04:32.751 Build type: native build
00:04:32.751 Project name: libvfio-user
00:04:32.751 Project version: 0.0.1
00:04:32.751 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:32.751 C linker for the host machine: gcc ld.bfd 2.40-14
00:04:32.751 Host machine cpu family: x86_64
00:04:32.751 Host machine cpu: x86_64
00:04:32.751 Run-time dependency threads found: YES
00:04:32.751 Library dl found: YES
00:04:32.751 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:32.751 Run-time dependency json-c found: YES 0.17
00:04:32.751 Run-time dependency cmocka found: YES 1.1.7
00:04:32.751 Program pytest-3 found: NO
00:04:32.751 Program flake8 found: NO
00:04:32.751 Program misspell-fixer found: NO
00:04:32.751 Program restructuredtext-lint found: NO
00:04:32.751 Program valgrind found: YES (/usr/bin/valgrind)
00:04:32.751 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:32.751 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:32.751 Compiler for C supports arguments -Wwrite-strings: YES
00:04:32.751 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:32.751 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh)
00:04:32.751 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh)
00:04:32.751 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:32.751 Build targets in project: 8
00:04:32.751 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:04:32.751 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:04:32.751 
00:04:32.751 libvfio-user 0.0.1
00:04:32.751 
00:04:32.751 User defined options
00:04:32.751 buildtype : debug
00:04:32.751 default_library: shared
00:04:32.751 libdir : /usr/local/lib
00:04:32.751 
00:04:32.751 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:33.011 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug'
00:04:33.270 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:04:33.270 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:04:33.270 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:04:33.270 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:04:33.270 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:04:33.270 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:04:33.270 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:04:33.270 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:04:33.270 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:04:33.270 [10/37] Compiling C object samples/null.p/null.c.o
00:04:33.270 [11/37] Compiling C object samples/lspci.p/lspci.c.o
00:04:33.270 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:04:33.270 [13/37] Compiling C object samples/client.p/client.c.o
00:04:33.528 [14/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:04:33.528 [15/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:04:33.528 [16/37] Linking target samples/client
00:04:33.528 [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:04:33.528 [18/37] Compiling C object samples/server.p/server.c.o
00:04:33.528 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:04:33.528 [20/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:04:33.528 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:04:33.528 [22/37] Linking target lib/libvfio-user.so.0.0.1
00:04:33.528 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:04:33.528 [24/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:04:33.528 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:04:33.528 [26/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:04:33.528 [27/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:04:33.528 [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:04:33.788 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:04:33.788 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:04:33.788 [31/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:04:33.788 [32/37] Linking target test/unit_tests
00:04:33.788 [33/37] Linking target samples/server
00:04:33.788 [34/37] Linking target samples/shadow_ioeventfd_server
00:04:33.788 [35/37] Linking target samples/lspci
00:04:33.788 [36/37] Linking target samples/null
00:04:33.788 [37/37] Linking target samples/gpio-pci-idio-16
00:04:33.788 INFO: autodetecting backend as ninja
00:04:33.788 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:33.788
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:34.355 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:04:34.355 ninja: no work to do. 00:05:21.027 CC lib/ut/ut.o 00:05:21.027 CC lib/ut_mock/mock.o 00:05:21.027 CC lib/log/log.o 00:05:21.027 CC lib/log/log_flags.o 00:05:21.027 CC lib/log/log_deprecated.o 00:05:21.027 LIB libspdk_ut.a 00:05:21.027 LIB libspdk_ut_mock.a 00:05:21.027 LIB libspdk_log.a 00:05:21.027 SO libspdk_ut_mock.so.6.0 00:05:21.027 SO libspdk_ut.so.2.0 00:05:21.027 SO libspdk_log.so.7.0 00:05:21.027 SYMLINK libspdk_ut_mock.so 00:05:21.027 SYMLINK libspdk_ut.so 00:05:21.027 SYMLINK libspdk_log.so 00:05:21.291 CC lib/dma/dma.o 00:05:21.291 CXX lib/trace_parser/trace.o 00:05:21.291 CC lib/ioat/ioat.o 00:05:21.291 CC lib/util/base64.o 00:05:21.291 CC lib/util/bit_array.o 00:05:21.291 CC lib/util/cpuset.o 00:05:21.291 CC lib/util/crc32.o 00:05:21.291 CC lib/util/crc32c.o 00:05:21.291 CC lib/util/crc16.o 00:05:21.549 CC lib/vfio_user/host/vfio_user_pci.o 00:05:21.549 CC lib/vfio_user/host/vfio_user.o 00:05:21.549 CC lib/util/crc32_ieee.o 00:05:21.549 CC lib/util/crc64.o 00:05:21.549 CC lib/util/dif.o 00:05:21.549 LIB libspdk_dma.a 00:05:21.549 CC lib/util/fd.o 00:05:21.549 SO libspdk_dma.so.5.0 00:05:21.549 CC lib/util/fd_group.o 00:05:21.549 SYMLINK libspdk_dma.so 00:05:21.549 CC lib/util/file.o 00:05:21.549 CC lib/util/hexlify.o 00:05:21.808 CC lib/util/iov.o 00:05:21.808 LIB libspdk_ioat.a 00:05:21.808 SO libspdk_ioat.so.7.0 00:05:21.808 LIB libspdk_vfio_user.a 00:05:21.808 CC lib/util/math.o 00:05:21.808 CC lib/util/net.o 00:05:21.808 SYMLINK libspdk_ioat.so 00:05:21.808 CC lib/util/pipe.o 00:05:21.808 SO libspdk_vfio_user.so.5.0 00:05:21.808 CC lib/util/strerror_tls.o 00:05:21.808 SYMLINK libspdk_vfio_user.so 00:05:21.808 CC lib/util/string.o 00:05:21.808 CC lib/util/uuid.o 00:05:21.808 CC lib/util/xor.o 00:05:21.808 CC lib/util/zipf.o 00:05:21.808 CC lib/util/md5.o 00:05:22.066 LIB libspdk_util.a 00:05:22.325 SO libspdk_util.so.10.0 00:05:22.325 LIB libspdk_trace_parser.a 00:05:22.325 SO libspdk_trace_parser.so.6.0 00:05:22.325 SYMLINK libspdk_util.so 00:05:22.325 SYMLINK libspdk_trace_parser.so 00:05:22.584 CC lib/vmd/vmd.o 00:05:22.584 CC lib/rdma_utils/rdma_utils.o 00:05:22.584 CC lib/conf/conf.o 00:05:22.584 CC lib/vmd/led.o 00:05:22.584 CC lib/json/json_parse.o 00:05:22.584 CC lib/idxd/idxd.o 00:05:22.584 CC lib/json/json_util.o 00:05:22.584 CC lib/json/json_write.o 00:05:22.584 CC lib/rdma_provider/common.o 00:05:22.584 CC lib/env_dpdk/env.o 00:05:22.843 CC lib/env_dpdk/memory.o 00:05:22.843 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:22.843 LIB libspdk_rdma_utils.a 00:05:22.843 CC lib/idxd/idxd_user.o 00:05:22.843 CC lib/env_dpdk/pci.o 00:05:22.843 SO libspdk_rdma_utils.so.1.0 00:05:22.843 LIB libspdk_conf.a 00:05:22.843 SO libspdk_conf.so.6.0 00:05:22.843 LIB libspdk_json.a 00:05:22.843 SYMLINK libspdk_rdma_utils.so 00:05:22.843 CC lib/env_dpdk/init.o 00:05:22.843 SYMLINK libspdk_conf.so 00:05:22.843 SO libspdk_json.so.6.0 00:05:22.843 CC lib/env_dpdk/threads.o 00:05:22.843 LIB libspdk_rdma_provider.a 00:05:23.103 SYMLINK libspdk_json.so 00:05:23.103 CC lib/env_dpdk/pci_ioat.o 00:05:23.103 SO libspdk_rdma_provider.so.6.0 00:05:23.103 CC lib/env_dpdk/pci_virtio.o 00:05:23.103 SYMLINK libspdk_rdma_provider.so 00:05:23.103 CC lib/idxd/idxd_kernel.o 00:05:23.103 CC lib/env_dpdk/pci_vmd.o 00:05:23.103 CC 
lib/env_dpdk/pci_idxd.o 00:05:23.103 CC lib/env_dpdk/pci_event.o 00:05:23.103 CC lib/env_dpdk/sigbus_handler.o 00:05:23.103 CC lib/env_dpdk/pci_dpdk.o 00:05:23.103 LIB libspdk_vmd.a 00:05:23.362 SO libspdk_vmd.so.6.0 00:05:23.362 CC lib/jsonrpc/jsonrpc_server.o 00:05:23.362 LIB libspdk_idxd.a 00:05:23.362 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:23.362 SO libspdk_idxd.so.12.1 00:05:23.362 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:23.362 SYMLINK libspdk_vmd.so 00:05:23.362 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:23.362 CC lib/jsonrpc/jsonrpc_client.o 00:05:23.362 SYMLINK libspdk_idxd.so 00:05:23.362 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:23.621 LIB libspdk_jsonrpc.a 00:05:23.621 SO libspdk_jsonrpc.so.6.0 00:05:23.621 SYMLINK libspdk_jsonrpc.so 00:05:23.880 LIB libspdk_env_dpdk.a 00:05:23.880 CC lib/rpc/rpc.o 00:05:24.138 SO libspdk_env_dpdk.so.15.0 00:05:24.138 SYMLINK libspdk_env_dpdk.so 00:05:24.138 LIB libspdk_rpc.a 00:05:24.138 SO libspdk_rpc.so.6.0 00:05:24.138 SYMLINK libspdk_rpc.so 00:05:24.396 CC lib/trace/trace.o 00:05:24.396 CC lib/trace/trace_flags.o 00:05:24.396 CC lib/trace/trace_rpc.o 00:05:24.396 CC lib/keyring/keyring_rpc.o 00:05:24.396 CC lib/keyring/keyring.o 00:05:24.396 CC lib/notify/notify_rpc.o 00:05:24.396 CC lib/notify/notify.o 00:05:24.655 LIB libspdk_notify.a 00:05:24.655 LIB libspdk_keyring.a 00:05:24.655 LIB libspdk_trace.a 00:05:24.655 SO libspdk_notify.so.6.0 00:05:24.655 SO libspdk_keyring.so.2.0 00:05:24.655 SO libspdk_trace.so.11.0 00:05:24.914 SYMLINK libspdk_notify.so 00:05:24.914 SYMLINK libspdk_keyring.so 00:05:24.914 SYMLINK libspdk_trace.so 00:05:24.914 CC lib/sock/sock.o 00:05:24.914 CC lib/sock/sock_rpc.o 00:05:24.914 CC lib/thread/iobuf.o 00:05:24.914 CC lib/thread/thread.o 00:05:25.480 LIB libspdk_sock.a 00:05:25.480 SO libspdk_sock.so.10.0 00:05:25.739 SYMLINK libspdk_sock.so 00:05:25.998 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:25.998 CC lib/nvme/nvme_ctrlr.o 00:05:25.998 CC lib/nvme/nvme_fabric.o 00:05:25.998 CC lib/nvme/nvme_ns_cmd.o 00:05:25.998 CC lib/nvme/nvme_ns.o 00:05:25.998 CC lib/nvme/nvme_pcie_common.o 00:05:25.998 CC lib/nvme/nvme_pcie.o 00:05:25.998 CC lib/nvme/nvme.o 00:05:25.998 CC lib/nvme/nvme_qpair.o 00:05:26.565 LIB libspdk_thread.a 00:05:26.565 SO libspdk_thread.so.10.1 00:05:26.565 SYMLINK libspdk_thread.so 00:05:26.565 CC lib/nvme/nvme_quirks.o 00:05:26.824 CC lib/nvme/nvme_transport.o 00:05:26.824 CC lib/nvme/nvme_discovery.o 00:05:26.824 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:26.824 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:26.824 CC lib/accel/accel.o 00:05:26.824 CC lib/nvme/nvme_tcp.o 00:05:26.824 CC lib/nvme/nvme_opal.o 00:05:27.083 CC lib/blob/blobstore.o 00:05:27.342 CC lib/blob/request.o 00:05:27.342 CC lib/accel/accel_rpc.o 00:05:27.342 CC lib/accel/accel_sw.o 00:05:27.342 CC lib/nvme/nvme_io_msg.o 00:05:27.601 CC lib/blob/zeroes.o 00:05:27.601 CC lib/nvme/nvme_poll_group.o 00:05:27.601 CC lib/nvme/nvme_zns.o 00:05:27.601 CC lib/blob/blob_bs_dev.o 00:05:27.860 CC lib/nvme/nvme_stubs.o 00:05:27.860 CC lib/init/json_config.o 00:05:27.860 LIB libspdk_accel.a 00:05:28.119 CC lib/virtio/virtio.o 00:05:28.119 SO libspdk_accel.so.16.0 00:05:28.119 SYMLINK libspdk_accel.so 00:05:28.119 CC lib/virtio/virtio_vhost_user.o 00:05:28.119 CC lib/virtio/virtio_vfio_user.o 00:05:28.119 CC lib/init/subsystem.o 00:05:28.119 CC lib/init/subsystem_rpc.o 00:05:28.119 CC lib/virtio/virtio_pci.o 00:05:28.378 CC lib/init/rpc.o 00:05:28.378 CC lib/nvme/nvme_auth.o 00:05:28.378 CC lib/nvme/nvme_cuse.o 00:05:28.378 CC lib/nvme/nvme_vfio_user.o 
00:05:28.378 CC lib/vfu_tgt/tgt_endpoint.o 00:05:28.378 CC lib/vfu_tgt/tgt_rpc.o 00:05:28.378 CC lib/nvme/nvme_rdma.o 00:05:28.378 LIB libspdk_init.a 00:05:28.636 SO libspdk_init.so.6.0 00:05:28.636 LIB libspdk_virtio.a 00:05:28.636 SO libspdk_virtio.so.7.0 00:05:28.636 CC lib/fsdev/fsdev.o 00:05:28.636 SYMLINK libspdk_init.so 00:05:28.636 CC lib/fsdev/fsdev_io.o 00:05:28.636 SYMLINK libspdk_virtio.so 00:05:28.636 CC lib/bdev/bdev.o 00:05:28.636 LIB libspdk_vfu_tgt.a 00:05:28.636 SO libspdk_vfu_tgt.so.3.0 00:05:28.896 SYMLINK libspdk_vfu_tgt.so 00:05:28.896 CC lib/bdev/bdev_rpc.o 00:05:28.896 CC lib/event/app.o 00:05:28.896 CC lib/event/reactor.o 00:05:28.896 CC lib/event/log_rpc.o 00:05:29.155 CC lib/event/app_rpc.o 00:05:29.155 CC lib/event/scheduler_static.o 00:05:29.155 CC lib/bdev/bdev_zone.o 00:05:29.155 CC lib/bdev/part.o 00:05:29.155 CC lib/fsdev/fsdev_rpc.o 00:05:29.155 CC lib/bdev/scsi_nvme.o 00:05:29.414 LIB libspdk_fsdev.a 00:05:29.414 LIB libspdk_event.a 00:05:29.414 SO libspdk_fsdev.so.1.0 00:05:29.414 SO libspdk_event.so.14.0 00:05:29.414 SYMLINK libspdk_fsdev.so 00:05:29.414 SYMLINK libspdk_event.so 00:05:29.672 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:29.672 LIB libspdk_nvme.a 00:05:29.932 SO libspdk_nvme.so.14.0 00:05:30.191 LIB libspdk_blob.a 00:05:30.191 SO libspdk_blob.so.11.0 00:05:30.449 SYMLINK libspdk_nvme.so 00:05:30.449 LIB libspdk_fuse_dispatcher.a 00:05:30.449 SO libspdk_fuse_dispatcher.so.1.0 00:05:30.449 SYMLINK libspdk_blob.so 00:05:30.449 SYMLINK libspdk_fuse_dispatcher.so 00:05:30.707 CC lib/blobfs/blobfs.o 00:05:30.707 CC lib/blobfs/tree.o 00:05:30.707 CC lib/lvol/lvol.o 00:05:31.279 LIB libspdk_bdev.a 00:05:31.279 SO libspdk_bdev.so.16.0 00:05:31.279 SYMLINK libspdk_bdev.so 00:05:31.538 CC lib/scsi/dev.o 00:05:31.538 CC lib/scsi/lun.o 00:05:31.538 CC lib/scsi/scsi.o 00:05:31.538 CC lib/scsi/port.o 00:05:31.538 CC lib/ublk/ublk.o 00:05:31.538 CC lib/nbd/nbd.o 00:05:31.538 CC lib/ftl/ftl_core.o 00:05:31.538 CC lib/nvmf/ctrlr.o 00:05:31.538 LIB libspdk_blobfs.a 00:05:31.538 SO libspdk_blobfs.so.10.0 00:05:31.538 LIB libspdk_lvol.a 00:05:31.538 CC lib/nvmf/ctrlr_discovery.o 00:05:31.538 SYMLINK libspdk_blobfs.so 00:05:31.538 SO libspdk_lvol.so.10.0 00:05:31.538 CC lib/scsi/scsi_bdev.o 00:05:31.538 CC lib/nvmf/ctrlr_bdev.o 00:05:31.797 SYMLINK libspdk_lvol.so 00:05:31.797 CC lib/nvmf/subsystem.o 00:05:31.797 CC lib/ublk/ublk_rpc.o 00:05:31.797 CC lib/ftl/ftl_init.o 00:05:31.797 CC lib/ftl/ftl_layout.o 00:05:32.057 CC lib/ftl/ftl_debug.o 00:05:32.057 CC lib/nbd/nbd_rpc.o 00:05:32.057 CC lib/ftl/ftl_io.o 00:05:32.057 LIB libspdk_ublk.a 00:05:32.057 CC lib/ftl/ftl_sb.o 00:05:32.057 CC lib/scsi/scsi_pr.o 00:05:32.057 SO libspdk_ublk.so.3.0 00:05:32.057 LIB libspdk_nbd.a 00:05:32.057 CC lib/scsi/scsi_rpc.o 00:05:32.316 SO libspdk_nbd.so.7.0 00:05:32.316 SYMLINK libspdk_ublk.so 00:05:32.316 CC lib/nvmf/nvmf.o 00:05:32.316 CC lib/ftl/ftl_l2p.o 00:05:32.316 CC lib/ftl/ftl_l2p_flat.o 00:05:32.316 SYMLINK libspdk_nbd.so 00:05:32.316 CC lib/ftl/ftl_nv_cache.o 00:05:32.316 CC lib/ftl/ftl_band.o 00:05:32.316 CC lib/ftl/ftl_band_ops.o 00:05:32.316 CC lib/nvmf/nvmf_rpc.o 00:05:32.316 CC lib/scsi/task.o 00:05:32.316 CC lib/nvmf/transport.o 00:05:32.575 CC lib/nvmf/tcp.o 00:05:32.575 LIB libspdk_scsi.a 00:05:32.575 CC lib/ftl/ftl_writer.o 00:05:32.575 CC lib/nvmf/stubs.o 00:05:32.575 SO libspdk_scsi.so.9.0 00:05:32.834 SYMLINK libspdk_scsi.so 00:05:32.834 CC lib/nvmf/mdns_server.o 00:05:32.834 CC lib/nvmf/vfio_user.o 00:05:32.834 CC lib/ftl/ftl_rq.o 00:05:33.093 CC 
lib/ftl/ftl_reloc.o 00:05:33.093 CC lib/ftl/ftl_l2p_cache.o 00:05:33.093 CC lib/ftl/ftl_p2l.o 00:05:33.093 CC lib/ftl/ftl_p2l_log.o 00:05:33.093 CC lib/ftl/mngt/ftl_mngt.o 00:05:33.093 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:33.352 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:33.352 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:33.352 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:33.611 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:33.611 CC lib/nvmf/rdma.o 00:05:33.611 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:33.611 CC lib/nvmf/auth.o 00:05:33.611 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:33.611 CC lib/iscsi/conn.o 00:05:33.611 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:33.611 CC lib/vhost/vhost.o 00:05:33.870 CC lib/vhost/vhost_rpc.o 00:05:33.870 CC lib/vhost/vhost_scsi.o 00:05:33.870 CC lib/iscsi/init_grp.o 00:05:33.870 CC lib/vhost/vhost_blk.o 00:05:34.129 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:34.129 CC lib/iscsi/iscsi.o 00:05:34.388 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:34.388 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:34.388 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:34.388 CC lib/ftl/utils/ftl_conf.o 00:05:34.388 CC lib/vhost/rte_vhost_user.o 00:05:34.388 CC lib/ftl/utils/ftl_md.o 00:05:34.647 CC lib/ftl/utils/ftl_mempool.o 00:05:34.647 CC lib/ftl/utils/ftl_bitmap.o 00:05:34.647 CC lib/ftl/utils/ftl_property.o 00:05:34.647 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:34.647 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:34.647 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:34.906 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:34.906 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:34.906 CC lib/iscsi/param.o 00:05:34.906 CC lib/iscsi/portal_grp.o 00:05:34.906 CC lib/iscsi/tgt_node.o 00:05:34.906 CC lib/iscsi/iscsi_subsystem.o 00:05:34.907 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:34.907 CC lib/iscsi/iscsi_rpc.o 00:05:35.166 CC lib/iscsi/task.o 00:05:35.166 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:35.166 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:35.166 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:35.166 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:35.427 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:35.427 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:35.427 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:35.427 CC lib/ftl/base/ftl_base_dev.o 00:05:35.427 CC lib/ftl/base/ftl_base_bdev.o 00:05:35.427 LIB libspdk_iscsi.a 00:05:35.427 LIB libspdk_nvmf.a 00:05:35.427 CC lib/ftl/ftl_trace.o 00:05:35.427 LIB libspdk_vhost.a 00:05:35.427 SO libspdk_iscsi.so.8.0 00:05:35.686 SO libspdk_nvmf.so.19.0 00:05:35.686 SO libspdk_vhost.so.8.0 00:05:35.686 SYMLINK libspdk_iscsi.so 00:05:35.686 SYMLINK libspdk_vhost.so 00:05:35.686 LIB libspdk_ftl.a 00:05:35.686 SYMLINK libspdk_nvmf.so 00:05:35.945 SO libspdk_ftl.so.9.0 00:05:36.204 SYMLINK libspdk_ftl.so 00:05:36.462 CC module/env_dpdk/env_dpdk_rpc.o 00:05:36.721 CC module/vfu_device/vfu_virtio.o 00:05:36.721 CC module/keyring/file/keyring.o 00:05:36.721 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:36.721 CC module/fsdev/aio/fsdev_aio.o 00:05:36.721 CC module/accel/ioat/accel_ioat.o 00:05:36.721 CC module/sock/posix/posix.o 00:05:36.721 CC module/blob/bdev/blob_bdev.o 00:05:36.721 CC module/accel/error/accel_error.o 00:05:36.721 CC module/keyring/linux/keyring.o 00:05:36.721 LIB libspdk_env_dpdk_rpc.a 00:05:36.721 SO libspdk_env_dpdk_rpc.so.6.0 00:05:36.721 SYMLINK libspdk_env_dpdk_rpc.so 00:05:36.721 CC module/keyring/linux/keyring_rpc.o 00:05:36.980 CC module/keyring/file/keyring_rpc.o 00:05:36.980 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:36.980 CC module/accel/ioat/accel_ioat_rpc.o 00:05:36.980 LIB 
libspdk_scheduler_dynamic.a 00:05:36.980 CC module/accel/error/accel_error_rpc.o 00:05:36.980 SO libspdk_scheduler_dynamic.so.4.0 00:05:36.980 LIB libspdk_blob_bdev.a 00:05:36.980 SYMLINK libspdk_scheduler_dynamic.so 00:05:36.980 LIB libspdk_keyring_linux.a 00:05:36.980 SO libspdk_blob_bdev.so.11.0 00:05:36.980 LIB libspdk_keyring_file.a 00:05:36.980 SO libspdk_keyring_linux.so.1.0 00:05:36.980 LIB libspdk_accel_ioat.a 00:05:36.980 SO libspdk_keyring_file.so.2.0 00:05:36.980 LIB libspdk_accel_error.a 00:05:36.980 SYMLINK libspdk_blob_bdev.so 00:05:36.980 SO libspdk_accel_ioat.so.6.0 00:05:36.980 SYMLINK libspdk_keyring_linux.so 00:05:36.980 SO libspdk_accel_error.so.2.0 00:05:36.980 SYMLINK libspdk_keyring_file.so 00:05:37.239 SYMLINK libspdk_accel_ioat.so 00:05:37.239 CC module/vfu_device/vfu_virtio_blk.o 00:05:37.239 SYMLINK libspdk_accel_error.so 00:05:37.239 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:37.239 CC module/fsdev/aio/linux_aio_mgr.o 00:05:37.239 CC module/scheduler/gscheduler/gscheduler.o 00:05:37.239 CC module/accel/dsa/accel_dsa.o 00:05:37.239 CC module/accel/iaa/accel_iaa.o 00:05:37.239 CC module/accel/iaa/accel_iaa_rpc.o 00:05:37.239 LIB libspdk_scheduler_dpdk_governor.a 00:05:37.239 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:37.239 LIB libspdk_scheduler_gscheduler.a 00:05:37.498 CC module/bdev/error/vbdev_error.o 00:05:37.498 LIB libspdk_fsdev_aio.a 00:05:37.498 CC module/bdev/delay/vbdev_delay.o 00:05:37.498 SO libspdk_scheduler_gscheduler.so.4.0 00:05:37.498 SO libspdk_fsdev_aio.so.1.0 00:05:37.498 LIB libspdk_sock_posix.a 00:05:37.498 CC module/vfu_device/vfu_virtio_scsi.o 00:05:37.498 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:37.498 SO libspdk_sock_posix.so.6.0 00:05:37.498 CC module/bdev/error/vbdev_error_rpc.o 00:05:37.498 SYMLINK libspdk_scheduler_gscheduler.so 00:05:37.498 LIB libspdk_accel_iaa.a 00:05:37.498 SYMLINK libspdk_fsdev_aio.so 00:05:37.498 CC module/accel/dsa/accel_dsa_rpc.o 00:05:37.498 SO libspdk_accel_iaa.so.3.0 00:05:37.498 SYMLINK libspdk_sock_posix.so 00:05:37.498 SYMLINK libspdk_accel_iaa.so 00:05:37.498 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:37.757 CC module/bdev/gpt/gpt.o 00:05:37.757 LIB libspdk_bdev_error.a 00:05:37.757 LIB libspdk_accel_dsa.a 00:05:37.757 SO libspdk_bdev_error.so.6.0 00:05:37.757 SO libspdk_accel_dsa.so.5.0 00:05:37.757 CC module/bdev/lvol/vbdev_lvol.o 00:05:37.757 CC module/bdev/malloc/bdev_malloc.o 00:05:37.757 CC module/blobfs/bdev/blobfs_bdev.o 00:05:37.757 SYMLINK libspdk_bdev_error.so 00:05:37.757 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:37.757 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:37.757 SYMLINK libspdk_accel_dsa.so 00:05:37.757 LIB libspdk_bdev_delay.a 00:05:37.757 CC module/vfu_device/vfu_virtio_rpc.o 00:05:37.757 CC module/bdev/null/bdev_null.o 00:05:37.757 SO libspdk_bdev_delay.so.6.0 00:05:37.757 CC module/bdev/gpt/vbdev_gpt.o 00:05:37.757 SYMLINK libspdk_bdev_delay.so 00:05:37.757 CC module/bdev/null/bdev_null_rpc.o 00:05:38.016 CC module/bdev/nvme/bdev_nvme.o 00:05:38.016 CC module/vfu_device/vfu_virtio_fs.o 00:05:38.016 LIB libspdk_blobfs_bdev.a 00:05:38.016 SO libspdk_blobfs_bdev.so.6.0 00:05:38.016 SYMLINK libspdk_blobfs_bdev.so 00:05:38.016 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:38.016 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:38.016 LIB libspdk_bdev_null.a 00:05:38.016 SO libspdk_bdev_null.so.6.0 00:05:38.016 LIB libspdk_bdev_gpt.a 00:05:38.275 CC module/bdev/nvme/nvme_rpc.o 00:05:38.275 LIB libspdk_vfu_device.a 00:05:38.275 SYMLINK libspdk_bdev_null.so 
00:05:38.275 SO libspdk_bdev_gpt.so.6.0 00:05:38.275 SO libspdk_vfu_device.so.3.0 00:05:38.275 LIB libspdk_bdev_lvol.a 00:05:38.275 SYMLINK libspdk_bdev_gpt.so 00:05:38.275 LIB libspdk_bdev_malloc.a 00:05:38.275 CC module/bdev/passthru/vbdev_passthru.o 00:05:38.275 SO libspdk_bdev_lvol.so.6.0 00:05:38.275 SO libspdk_bdev_malloc.so.6.0 00:05:38.275 CC module/bdev/raid/bdev_raid.o 00:05:38.275 SYMLINK libspdk_vfu_device.so 00:05:38.275 CC module/bdev/raid/bdev_raid_rpc.o 00:05:38.275 SYMLINK libspdk_bdev_lvol.so 00:05:38.275 CC module/bdev/nvme/bdev_mdns_client.o 00:05:38.275 CC module/bdev/split/vbdev_split.o 00:05:38.275 SYMLINK libspdk_bdev_malloc.so 00:05:38.275 CC module/bdev/nvme/vbdev_opal.o 00:05:38.534 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:38.534 CC module/bdev/raid/bdev_raid_sb.o 00:05:38.534 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:38.534 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:38.534 CC module/bdev/split/vbdev_split_rpc.o 00:05:38.534 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:38.534 CC module/bdev/raid/raid0.o 00:05:38.793 LIB libspdk_bdev_passthru.a 00:05:38.793 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:38.793 LIB libspdk_bdev_split.a 00:05:38.793 SO libspdk_bdev_passthru.so.6.0 00:05:38.793 CC module/bdev/aio/bdev_aio.o 00:05:38.793 SO libspdk_bdev_split.so.6.0 00:05:38.793 SYMLINK libspdk_bdev_passthru.so 00:05:38.793 CC module/bdev/raid/raid1.o 00:05:38.793 CC module/bdev/raid/concat.o 00:05:38.793 SYMLINK libspdk_bdev_split.so 00:05:38.793 CC module/bdev/ftl/bdev_ftl.o 00:05:38.793 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:39.052 LIB libspdk_bdev_zone_block.a 00:05:39.052 CC module/bdev/iscsi/bdev_iscsi.o 00:05:39.052 SO libspdk_bdev_zone_block.so.6.0 00:05:39.052 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:39.052 SYMLINK libspdk_bdev_zone_block.so 00:05:39.052 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:39.052 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:39.052 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:39.052 CC module/bdev/aio/bdev_aio_rpc.o 00:05:39.312 LIB libspdk_bdev_ftl.a 00:05:39.312 SO libspdk_bdev_ftl.so.6.0 00:05:39.312 LIB libspdk_bdev_raid.a 00:05:39.312 SYMLINK libspdk_bdev_ftl.so 00:05:39.312 LIB libspdk_bdev_aio.a 00:05:39.312 LIB libspdk_bdev_iscsi.a 00:05:39.312 SO libspdk_bdev_raid.so.6.0 00:05:39.312 SO libspdk_bdev_aio.so.6.0 00:05:39.312 SO libspdk_bdev_iscsi.so.6.0 00:05:39.312 SYMLINK libspdk_bdev_iscsi.so 00:05:39.312 SYMLINK libspdk_bdev_aio.so 00:05:39.312 SYMLINK libspdk_bdev_raid.so 00:05:39.573 LIB libspdk_bdev_virtio.a 00:05:39.573 SO libspdk_bdev_virtio.so.6.0 00:05:39.573 SYMLINK libspdk_bdev_virtio.so 00:05:40.140 LIB libspdk_bdev_nvme.a 00:05:40.399 SO libspdk_bdev_nvme.so.7.0 00:05:40.399 SYMLINK libspdk_bdev_nvme.so 00:05:40.968 CC module/event/subsystems/sock/sock.o 00:05:40.968 CC module/event/subsystems/iobuf/iobuf.o 00:05:40.968 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:40.968 CC module/event/subsystems/keyring/keyring.o 00:05:40.968 CC module/event/subsystems/scheduler/scheduler.o 00:05:40.968 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:40.968 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:40.968 CC module/event/subsystems/vmd/vmd.o 00:05:40.968 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:40.968 CC module/event/subsystems/fsdev/fsdev.o 00:05:40.968 LIB libspdk_event_scheduler.a 00:05:40.968 LIB libspdk_event_vfu_tgt.a 00:05:40.968 SO libspdk_event_vfu_tgt.so.3.0 00:05:40.968 SO libspdk_event_scheduler.so.4.0 00:05:40.968 LIB libspdk_event_keyring.a 
00:05:40.968 LIB libspdk_event_sock.a 00:05:40.968 LIB libspdk_event_vmd.a 00:05:40.968 LIB libspdk_event_iobuf.a 00:05:40.968 LIB libspdk_event_fsdev.a 00:05:40.968 LIB libspdk_event_vhost_blk.a 00:05:40.968 SO libspdk_event_keyring.so.1.0 00:05:40.968 SO libspdk_event_sock.so.5.0 00:05:41.227 SO libspdk_event_vmd.so.6.0 00:05:41.227 SO libspdk_event_iobuf.so.3.0 00:05:41.227 SO libspdk_event_fsdev.so.1.0 00:05:41.227 SYMLINK libspdk_event_scheduler.so 00:05:41.227 SYMLINK libspdk_event_vfu_tgt.so 00:05:41.227 SO libspdk_event_vhost_blk.so.3.0 00:05:41.227 SYMLINK libspdk_event_keyring.so 00:05:41.227 SYMLINK libspdk_event_vmd.so 00:05:41.227 SYMLINK libspdk_event_sock.so 00:05:41.227 SYMLINK libspdk_event_iobuf.so 00:05:41.227 SYMLINK libspdk_event_fsdev.so 00:05:41.227 SYMLINK libspdk_event_vhost_blk.so 00:05:41.494 CC module/event/subsystems/accel/accel.o 00:05:41.494 LIB libspdk_event_accel.a 00:05:41.784 SO libspdk_event_accel.so.6.0 00:05:41.784 SYMLINK libspdk_event_accel.so 00:05:42.052 CC module/event/subsystems/bdev/bdev.o 00:05:42.052 LIB libspdk_event_bdev.a 00:05:42.310 SO libspdk_event_bdev.so.6.0 00:05:42.311 SYMLINK libspdk_event_bdev.so 00:05:42.569 CC module/event/subsystems/ublk/ublk.o 00:05:42.569 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:42.569 CC module/event/subsystems/nbd/nbd.o 00:05:42.569 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:42.569 CC module/event/subsystems/scsi/scsi.o 00:05:42.569 LIB libspdk_event_ublk.a 00:05:42.569 LIB libspdk_event_nbd.a 00:05:42.828 LIB libspdk_event_scsi.a 00:05:42.828 SO libspdk_event_ublk.so.3.0 00:05:42.828 SO libspdk_event_nbd.so.6.0 00:05:42.828 SO libspdk_event_scsi.so.6.0 00:05:42.828 SYMLINK libspdk_event_ublk.so 00:05:42.828 SYMLINK libspdk_event_nbd.so 00:05:42.828 SYMLINK libspdk_event_scsi.so 00:05:42.828 LIB libspdk_event_nvmf.a 00:05:42.828 SO libspdk_event_nvmf.so.6.0 00:05:42.828 SYMLINK libspdk_event_nvmf.so 00:05:43.086 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:43.086 CC module/event/subsystems/iscsi/iscsi.o 00:05:43.345 LIB libspdk_event_vhost_scsi.a 00:05:43.345 SO libspdk_event_vhost_scsi.so.3.0 00:05:43.345 LIB libspdk_event_iscsi.a 00:05:43.345 SO libspdk_event_iscsi.so.6.0 00:05:43.345 SYMLINK libspdk_event_vhost_scsi.so 00:05:43.345 SYMLINK libspdk_event_iscsi.so 00:05:43.603 SO libspdk.so.6.0 00:05:43.603 SYMLINK libspdk.so 00:05:43.862 CXX app/trace/trace.o 00:05:43.862 TEST_HEADER include/spdk/accel.h 00:05:43.862 TEST_HEADER include/spdk/accel_module.h 00:05:43.862 TEST_HEADER include/spdk/assert.h 00:05:43.862 TEST_HEADER include/spdk/barrier.h 00:05:43.862 TEST_HEADER include/spdk/base64.h 00:05:43.862 TEST_HEADER include/spdk/bdev.h 00:05:43.862 TEST_HEADER include/spdk/bdev_module.h 00:05:43.862 TEST_HEADER include/spdk/bdev_zone.h 00:05:43.862 TEST_HEADER include/spdk/bit_array.h 00:05:43.862 TEST_HEADER include/spdk/bit_pool.h 00:05:43.862 TEST_HEADER include/spdk/blob_bdev.h 00:05:43.862 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:43.862 TEST_HEADER include/spdk/blobfs.h 00:05:43.862 TEST_HEADER include/spdk/blob.h 00:05:43.862 TEST_HEADER include/spdk/conf.h 00:05:43.862 TEST_HEADER include/spdk/config.h 00:05:43.862 TEST_HEADER include/spdk/cpuset.h 00:05:43.862 TEST_HEADER include/spdk/crc16.h 00:05:43.862 TEST_HEADER include/spdk/crc32.h 00:05:43.862 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:43.862 TEST_HEADER include/spdk/crc64.h 00:05:43.862 TEST_HEADER include/spdk/dif.h 00:05:43.862 TEST_HEADER include/spdk/dma.h 00:05:43.862 TEST_HEADER 
include/spdk/endian.h 00:05:43.862 TEST_HEADER include/spdk/env_dpdk.h 00:05:43.862 TEST_HEADER include/spdk/env.h 00:05:43.862 TEST_HEADER include/spdk/event.h 00:05:43.862 TEST_HEADER include/spdk/fd_group.h 00:05:43.862 TEST_HEADER include/spdk/fd.h 00:05:43.862 TEST_HEADER include/spdk/file.h 00:05:43.862 TEST_HEADER include/spdk/fsdev.h 00:05:43.862 TEST_HEADER include/spdk/fsdev_module.h 00:05:43.862 TEST_HEADER include/spdk/ftl.h 00:05:43.862 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:43.862 TEST_HEADER include/spdk/gpt_spec.h 00:05:43.862 TEST_HEADER include/spdk/hexlify.h 00:05:43.862 TEST_HEADER include/spdk/histogram_data.h 00:05:43.862 CC examples/ioat/perf/perf.o 00:05:43.862 TEST_HEADER include/spdk/idxd.h 00:05:43.862 TEST_HEADER include/spdk/idxd_spec.h 00:05:43.862 TEST_HEADER include/spdk/init.h 00:05:43.862 CC examples/util/zipf/zipf.o 00:05:43.862 TEST_HEADER include/spdk/ioat.h 00:05:43.862 CC test/thread/poller_perf/poller_perf.o 00:05:43.862 TEST_HEADER include/spdk/ioat_spec.h 00:05:43.862 TEST_HEADER include/spdk/iscsi_spec.h 00:05:43.862 TEST_HEADER include/spdk/json.h 00:05:43.862 TEST_HEADER include/spdk/jsonrpc.h 00:05:43.862 TEST_HEADER include/spdk/keyring.h 00:05:43.862 TEST_HEADER include/spdk/keyring_module.h 00:05:43.862 TEST_HEADER include/spdk/likely.h 00:05:43.862 TEST_HEADER include/spdk/log.h 00:05:43.862 TEST_HEADER include/spdk/lvol.h 00:05:43.862 TEST_HEADER include/spdk/md5.h 00:05:43.862 CC test/dma/test_dma/test_dma.o 00:05:43.862 TEST_HEADER include/spdk/memory.h 00:05:43.862 TEST_HEADER include/spdk/mmio.h 00:05:43.863 TEST_HEADER include/spdk/nbd.h 00:05:43.863 TEST_HEADER include/spdk/net.h 00:05:43.863 TEST_HEADER include/spdk/notify.h 00:05:43.863 TEST_HEADER include/spdk/nvme.h 00:05:43.863 TEST_HEADER include/spdk/nvme_intel.h 00:05:43.863 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:43.863 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:43.863 CC test/app/bdev_svc/bdev_svc.o 00:05:43.863 TEST_HEADER include/spdk/nvme_spec.h 00:05:43.863 TEST_HEADER include/spdk/nvme_zns.h 00:05:43.863 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:43.863 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:43.863 TEST_HEADER include/spdk/nvmf.h 00:05:43.863 TEST_HEADER include/spdk/nvmf_spec.h 00:05:43.863 TEST_HEADER include/spdk/nvmf_transport.h 00:05:43.863 TEST_HEADER include/spdk/opal.h 00:05:43.863 TEST_HEADER include/spdk/opal_spec.h 00:05:43.863 TEST_HEADER include/spdk/pci_ids.h 00:05:43.863 TEST_HEADER include/spdk/pipe.h 00:05:43.863 TEST_HEADER include/spdk/queue.h 00:05:43.863 TEST_HEADER include/spdk/reduce.h 00:05:43.863 TEST_HEADER include/spdk/rpc.h 00:05:43.863 TEST_HEADER include/spdk/scheduler.h 00:05:43.863 TEST_HEADER include/spdk/scsi.h 00:05:44.121 TEST_HEADER include/spdk/scsi_spec.h 00:05:44.121 CC test/env/mem_callbacks/mem_callbacks.o 00:05:44.121 TEST_HEADER include/spdk/sock.h 00:05:44.121 TEST_HEADER include/spdk/stdinc.h 00:05:44.121 TEST_HEADER include/spdk/string.h 00:05:44.122 TEST_HEADER include/spdk/thread.h 00:05:44.122 TEST_HEADER include/spdk/trace.h 00:05:44.122 TEST_HEADER include/spdk/trace_parser.h 00:05:44.122 TEST_HEADER include/spdk/tree.h 00:05:44.122 TEST_HEADER include/spdk/ublk.h 00:05:44.122 TEST_HEADER include/spdk/util.h 00:05:44.122 TEST_HEADER include/spdk/uuid.h 00:05:44.122 TEST_HEADER include/spdk/version.h 00:05:44.122 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:44.122 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:44.122 TEST_HEADER include/spdk/vhost.h 00:05:44.122 TEST_HEADER 
include/spdk/vmd.h 00:05:44.122 TEST_HEADER include/spdk/xor.h 00:05:44.122 TEST_HEADER include/spdk/zipf.h 00:05:44.122 CXX test/cpp_headers/accel.o 00:05:44.122 LINK interrupt_tgt 00:05:44.122 LINK zipf 00:05:44.122 LINK poller_perf 00:05:44.122 LINK ioat_perf 00:05:44.122 LINK bdev_svc 00:05:44.122 LINK spdk_trace 00:05:44.122 CXX test/cpp_headers/accel_module.o 00:05:44.380 CC test/rpc_client/rpc_client_test.o 00:05:44.381 CC test/app/histogram_perf/histogram_perf.o 00:05:44.381 CC examples/ioat/verify/verify.o 00:05:44.381 CXX test/cpp_headers/assert.o 00:05:44.639 CC app/trace_record/trace_record.o 00:05:44.639 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:44.639 LINK test_dma 00:05:44.639 LINK rpc_client_test 00:05:44.639 LINK histogram_perf 00:05:44.639 CC app/nvmf_tgt/nvmf_main.o 00:05:44.639 LINK mem_callbacks 00:05:44.639 LINK verify 00:05:44.639 CXX test/cpp_headers/barrier.o 00:05:44.898 LINK nvmf_tgt 00:05:44.898 CC test/env/vtophys/vtophys.o 00:05:44.898 LINK spdk_trace_record 00:05:44.898 CXX test/cpp_headers/base64.o 00:05:44.898 LINK nvme_fuzz 00:05:44.898 CC test/event/event_perf/event_perf.o 00:05:44.898 CC test/event/reactor/reactor.o 00:05:44.898 CC examples/thread/thread/thread_ex.o 00:05:44.898 CC examples/sock/hello_world/hello_sock.o 00:05:44.898 LINK vtophys 00:05:45.157 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:45.157 LINK reactor 00:05:45.157 LINK event_perf 00:05:45.157 CXX test/cpp_headers/bdev.o 00:05:45.157 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:45.415 LINK hello_sock 00:05:45.415 CC app/iscsi_tgt/iscsi_tgt.o 00:05:45.415 LINK thread 00:05:45.415 LINK env_dpdk_post_init 00:05:45.415 CXX test/cpp_headers/bdev_module.o 00:05:45.415 CC test/event/reactor_perf/reactor_perf.o 00:05:45.415 CC test/accel/dif/dif.o 00:05:45.674 LINK iscsi_tgt 00:05:45.674 CC test/blobfs/mkfs/mkfs.o 00:05:45.674 CXX test/cpp_headers/bdev_zone.o 00:05:45.674 CC test/env/memory/memory_ut.o 00:05:45.674 LINK reactor_perf 00:05:45.674 CC test/lvol/esnap/esnap.o 00:05:45.674 CC test/nvme/aer/aer.o 00:05:45.674 CXX test/cpp_headers/bit_array.o 00:05:45.674 LINK mkfs 00:05:45.933 CC test/event/app_repeat/app_repeat.o 00:05:45.933 CC app/spdk_tgt/spdk_tgt.o 00:05:45.933 CXX test/cpp_headers/bit_pool.o 00:05:45.933 LINK aer 00:05:46.191 LINK app_repeat 00:05:46.191 CC test/nvme/reset/reset.o 00:05:46.191 CXX test/cpp_headers/blob_bdev.o 00:05:46.191 LINK dif 00:05:46.191 LINK spdk_tgt 00:05:46.191 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:46.191 CXX test/cpp_headers/blobfs_bdev.o 00:05:46.450 CC test/event/scheduler/scheduler.o 00:05:46.450 LINK reset 00:05:46.450 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:46.450 CC test/app/jsoncat/jsoncat.o 00:05:46.450 CXX test/cpp_headers/blobfs.o 00:05:46.709 CC app/spdk_lspci/spdk_lspci.o 00:05:46.709 LINK jsoncat 00:05:46.709 CC test/nvme/sgl/sgl.o 00:05:46.709 LINK scheduler 00:05:46.709 CXX test/cpp_headers/blob.o 00:05:46.709 LINK spdk_lspci 00:05:46.968 LINK memory_ut 00:05:46.968 LINK iscsi_fuzz 00:05:46.968 CC test/nvme/e2edp/nvme_dp.o 00:05:46.968 LINK vhost_fuzz 00:05:46.968 CXX test/cpp_headers/conf.o 00:05:46.968 LINK sgl 00:05:46.968 CC test/nvme/overhead/overhead.o 00:05:47.227 CXX test/cpp_headers/config.o 00:05:47.227 CXX test/cpp_headers/cpuset.o 00:05:47.227 CC test/env/pci/pci_ut.o 00:05:47.227 CC app/spdk_nvme_perf/perf.o 00:05:47.227 CC test/app/stub/stub.o 00:05:47.227 CC app/spdk_nvme_identify/identify.o 00:05:47.227 LINK nvme_dp 00:05:47.227 CC app/spdk_nvme_discover/discovery_aer.o 
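The CC entries for app/spdk_nvme_perf, app/spdk_nvme_identify and app/spdk_nvme_discover above build the standalone NVMe tools that ship with SPDK. A hedged example invocation against one of the QEMU-attached controllers that appears later in this log (binary paths assume a recent tree that installs the tools under build/bin; older trees kept them under build/examples):

  ./build/bin/spdk_nvme_perf -q 32 -o 4096 -w randread -t 10 \
      -r 'trtype:PCIe traddr:0000:00:10.0'    # 4 KiB random reads, QD 32, 10 s
  ./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'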
00:05:47.486 CXX test/cpp_headers/crc16.o 00:05:47.486 LINK overhead 00:05:47.486 LINK stub 00:05:47.486 CXX test/cpp_headers/crc32.o 00:05:47.486 LINK spdk_nvme_discover 00:05:47.486 CC app/spdk_top/spdk_top.o 00:05:47.486 CC test/nvme/err_injection/err_injection.o 00:05:47.745 LINK pci_ut 00:05:47.745 CXX test/cpp_headers/crc64.o 00:05:47.745 LINK err_injection 00:05:48.004 CC test/nvme/startup/startup.o 00:05:48.004 CXX test/cpp_headers/dif.o 00:05:48.004 CC app/vhost/vhost.o 00:05:48.263 CXX test/cpp_headers/dma.o 00:05:48.263 LINK startup 00:05:48.263 LINK spdk_nvme_identify 00:05:48.263 LINK spdk_nvme_perf 00:05:48.263 CC test/bdev/bdevio/bdevio.o 00:05:48.263 LINK vhost 00:05:48.522 CXX test/cpp_headers/endian.o 00:05:48.522 CC examples/vmd/lsvmd/lsvmd.o 00:05:48.781 CC test/nvme/reserve/reserve.o 00:05:48.781 LINK spdk_top 00:05:48.781 CXX test/cpp_headers/env_dpdk.o 00:05:48.781 LINK lsvmd 00:05:48.781 CC test/nvme/simple_copy/simple_copy.o 00:05:48.781 CC test/nvme/connect_stress/connect_stress.o 00:05:48.781 LINK bdevio 00:05:48.781 CC app/spdk_dd/spdk_dd.o 00:05:49.040 CXX test/cpp_headers/env.o 00:05:49.040 LINK connect_stress 00:05:49.040 LINK reserve 00:05:49.040 LINK simple_copy 00:05:49.040 CC examples/vmd/led/led.o 00:05:49.040 CC app/fio/nvme/fio_plugin.o 00:05:49.040 CXX test/cpp_headers/event.o 00:05:49.040 CXX test/cpp_headers/fd_group.o 00:05:49.299 LINK led 00:05:49.299 CC test/nvme/boot_partition/boot_partition.o 00:05:49.299 CC test/nvme/compliance/nvme_compliance.o 00:05:49.299 CC app/fio/bdev/fio_plugin.o 00:05:49.299 LINK spdk_dd 00:05:49.299 CXX test/cpp_headers/fd.o 00:05:49.557 LINK boot_partition 00:05:49.557 CC test/nvme/fused_ordering/fused_ordering.o 00:05:49.557 CXX test/cpp_headers/file.o 00:05:49.557 CXX test/cpp_headers/fsdev.o 00:05:49.557 LINK nvme_compliance 00:05:49.557 CC examples/idxd/perf/perf.o 00:05:49.557 LINK spdk_nvme 00:05:49.816 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:49.816 LINK fused_ordering 00:05:49.816 CXX test/cpp_headers/fsdev_module.o 00:05:49.816 CXX test/cpp_headers/ftl.o 00:05:49.816 CC test/nvme/fdp/fdp.o 00:05:49.816 LINK spdk_bdev 00:05:49.816 LINK doorbell_aers 00:05:49.816 CXX test/cpp_headers/fuse_dispatcher.o 00:05:50.075 LINK idxd_perf 00:05:50.075 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:50.075 CC test/nvme/cuse/cuse.o 00:05:50.075 CC examples/accel/perf/accel_perf.o 00:05:50.075 CXX test/cpp_headers/gpt_spec.o 00:05:50.075 LINK fdp 00:05:50.075 CC examples/blob/hello_world/hello_blob.o 00:05:50.335 CC examples/blob/cli/blobcli.o 00:05:50.335 CC examples/nvme/hello_world/hello_world.o 00:05:50.335 LINK hello_fsdev 00:05:50.335 CXX test/cpp_headers/hexlify.o 00:05:50.335 LINK hello_blob 00:05:50.335 CC examples/nvme/reconnect/reconnect.o 00:05:50.335 CXX test/cpp_headers/histogram_data.o 00:05:50.594 LINK hello_world 00:05:50.594 CXX test/cpp_headers/idxd.o 00:05:50.594 LINK accel_perf 00:05:50.594 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:50.594 CXX test/cpp_headers/idxd_spec.o 00:05:50.853 LINK blobcli 00:05:50.853 CC examples/nvme/arbitration/arbitration.o 00:05:50.854 LINK reconnect 00:05:50.854 CC examples/nvme/hotplug/hotplug.o 00:05:50.854 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:50.854 CXX test/cpp_headers/init.o 00:05:51.113 CC examples/nvme/abort/abort.o 00:05:51.113 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:51.113 LINK nvme_manage 00:05:51.113 CXX test/cpp_headers/ioat.o 00:05:51.113 LINK cmb_copy 00:05:51.113 CXX test/cpp_headers/ioat_spec.o 00:05:51.113 LINK 
hotplug 00:05:51.113 LINK arbitration 00:05:51.113 LINK esnap 00:05:51.113 CXX test/cpp_headers/iscsi_spec.o 00:05:51.372 LINK pmr_persistence 00:05:51.372 CXX test/cpp_headers/json.o 00:05:51.372 CXX test/cpp_headers/jsonrpc.o 00:05:51.372 CXX test/cpp_headers/keyring.o 00:05:51.372 CXX test/cpp_headers/keyring_module.o 00:05:51.372 CXX test/cpp_headers/likely.o 00:05:51.372 LINK cuse 00:05:51.372 CXX test/cpp_headers/log.o 00:05:51.372 CXX test/cpp_headers/lvol.o 00:05:51.372 CXX test/cpp_headers/md5.o 00:05:51.372 CXX test/cpp_headers/memory.o 00:05:51.372 CXX test/cpp_headers/mmio.o 00:05:51.372 LINK abort 00:05:51.631 CXX test/cpp_headers/nbd.o 00:05:51.631 CXX test/cpp_headers/net.o 00:05:51.631 CXX test/cpp_headers/notify.o 00:05:51.631 CXX test/cpp_headers/nvme.o 00:05:51.631 CXX test/cpp_headers/nvme_intel.o 00:05:51.631 CXX test/cpp_headers/nvme_ocssd.o 00:05:51.631 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:51.631 CXX test/cpp_headers/nvme_spec.o 00:05:51.631 CC examples/bdev/hello_world/hello_bdev.o 00:05:51.631 CC examples/bdev/bdevperf/bdevperf.o 00:05:51.631 CXX test/cpp_headers/nvme_zns.o 00:05:51.631 CXX test/cpp_headers/nvmf_cmd.o 00:05:51.631 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:51.631 CXX test/cpp_headers/nvmf.o 00:05:51.889 CXX test/cpp_headers/nvmf_spec.o 00:05:51.889 CXX test/cpp_headers/nvmf_transport.o 00:05:51.889 CXX test/cpp_headers/opal.o 00:05:51.890 CXX test/cpp_headers/opal_spec.o 00:05:51.890 CXX test/cpp_headers/pci_ids.o 00:05:51.890 LINK hello_bdev 00:05:51.890 CXX test/cpp_headers/pipe.o 00:05:51.890 CXX test/cpp_headers/queue.o 00:05:51.890 CXX test/cpp_headers/reduce.o 00:05:51.890 CXX test/cpp_headers/rpc.o 00:05:51.890 CXX test/cpp_headers/scheduler.o 00:05:51.890 CXX test/cpp_headers/scsi.o 00:05:51.890 CXX test/cpp_headers/scsi_spec.o 00:05:52.149 CXX test/cpp_headers/sock.o 00:05:52.149 CXX test/cpp_headers/stdinc.o 00:05:52.149 CXX test/cpp_headers/string.o 00:05:52.149 CXX test/cpp_headers/thread.o 00:05:52.149 CXX test/cpp_headers/trace.o 00:05:52.149 CXX test/cpp_headers/trace_parser.o 00:05:52.149 CXX test/cpp_headers/tree.o 00:05:52.149 CXX test/cpp_headers/ublk.o 00:05:52.149 CXX test/cpp_headers/util.o 00:05:52.149 CXX test/cpp_headers/uuid.o 00:05:52.149 CXX test/cpp_headers/version.o 00:05:52.149 CXX test/cpp_headers/vfio_user_pci.o 00:05:52.149 CXX test/cpp_headers/vfio_user_spec.o 00:05:52.149 CXX test/cpp_headers/vhost.o 00:05:52.149 CXX test/cpp_headers/vmd.o 00:05:52.149 CXX test/cpp_headers/xor.o 00:05:52.408 CXX test/cpp_headers/zipf.o 00:05:52.408 LINK bdevperf 00:05:52.975 CC examples/nvmf/nvmf/nvmf.o 00:05:53.233 LINK nvmf 00:05:53.492 00:05:53.492 real 1m22.544s 00:05:53.492 user 6m54.712s 00:05:53.492 sys 1m23.441s 00:05:53.492 18:26:09 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:53.492 ************************************ 00:05:53.492 END TEST make 00:05:53.492 ************************************ 00:05:53.492 18:26:09 make -- common/autotest_common.sh@10 -- $ set +x 00:05:53.492 18:26:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:53.492 18:26:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:53.492 18:26:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:53.492 18:26:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:53.492 18:26:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:53.492 18:26:09 -- pm/common@44 -- $ pid=6034 00:05:53.492 18:26:09 -- pm/common@50 -- $ kill -TERM 6034 
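The pm/common teardown entries around here (continuing into the next entries) show how autotest stops its resource monitors: each collector recorded its pid in a file under ../output/power, and stop_monitor_resources simply signals whatever pid each file names. A stripped-down sketch of the same pidfile pattern, with an illustrative vmstat collector standing in for the real scripts:

  out=/tmp/power; mkdir -p "$out"
  vmstat 1 > "$out/collect-vmstat.log" &        # start: background collector
  echo $! > "$out/collect-vmstat.pid"           #        remember its pid
  # ... the test workload runs here ...
  if [[ -e $out/collect-vmstat.pid ]]; then     # stop: signal the recorded pid
      kill -TERM "$(cat "$out/collect-vmstat.pid")"
  fi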
00:05:53.492 18:26:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:53.492 18:26:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:53.492 18:26:09 -- pm/common@44 -- $ pid=6035 00:05:53.492 18:26:09 -- pm/common@50 -- $ kill -TERM 6035 00:05:53.492 18:26:09 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:53.492 18:26:09 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:53.492 18:26:09 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:53.492 18:26:09 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:53.492 18:26:09 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.492 18:26:09 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.492 18:26:09 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.492 18:26:09 -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.492 18:26:09 -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.492 18:26:09 -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.492 18:26:09 -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.492 18:26:09 -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.492 18:26:09 -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.492 18:26:09 -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.492 18:26:09 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.492 18:26:09 -- scripts/common.sh@344 -- # case "$op" in 00:05:53.492 18:26:09 -- scripts/common.sh@345 -- # : 1 00:05:53.492 18:26:09 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.492 18:26:09 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.492 18:26:09 -- scripts/common.sh@365 -- # decimal 1 00:05:53.492 18:26:09 -- scripts/common.sh@353 -- # local d=1 00:05:53.492 18:26:09 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.492 18:26:09 -- scripts/common.sh@355 -- # echo 1 00:05:53.492 18:26:09 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.492 18:26:09 -- scripts/common.sh@366 -- # decimal 2 00:05:53.492 18:26:09 -- scripts/common.sh@353 -- # local d=2 00:05:53.492 18:26:09 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.492 18:26:09 -- scripts/common.sh@355 -- # echo 2 00:05:53.492 18:26:09 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.492 18:26:09 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.492 18:26:09 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.492 18:26:09 -- scripts/common.sh@368 -- # return 0 00:05:53.751 18:26:09 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.751 18:26:09 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:53.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.751 --rc genhtml_branch_coverage=1 00:05:53.751 --rc genhtml_function_coverage=1 00:05:53.751 --rc genhtml_legend=1 00:05:53.751 --rc geninfo_all_blocks=1 00:05:53.751 --rc geninfo_unexecuted_blocks=1 00:05:53.751 00:05:53.751 ' 00:05:53.751 18:26:09 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:53.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.751 --rc genhtml_branch_coverage=1 00:05:53.751 --rc genhtml_function_coverage=1 00:05:53.751 --rc genhtml_legend=1 00:05:53.751 --rc geninfo_all_blocks=1 00:05:53.751 --rc geninfo_unexecuted_blocks=1 00:05:53.751 00:05:53.751 ' 00:05:53.751 18:26:09 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:53.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.751 --rc 
genhtml_branch_coverage=1 00:05:53.751 --rc genhtml_function_coverage=1 00:05:53.751 --rc genhtml_legend=1 00:05:53.751 --rc geninfo_all_blocks=1 00:05:53.751 --rc geninfo_unexecuted_blocks=1 00:05:53.751 00:05:53.751 ' 00:05:53.751 18:26:09 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:53.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.751 --rc genhtml_branch_coverage=1 00:05:53.751 --rc genhtml_function_coverage=1 00:05:53.751 --rc genhtml_legend=1 00:05:53.751 --rc geninfo_all_blocks=1 00:05:53.751 --rc geninfo_unexecuted_blocks=1 00:05:53.751 00:05:53.751 ' 00:05:53.751 18:26:09 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:53.751 18:26:09 -- nvmf/common.sh@7 -- # uname -s 00:05:53.751 18:26:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:53.751 18:26:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:53.751 18:26:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:53.751 18:26:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:53.751 18:26:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:53.751 18:26:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:53.751 18:26:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:53.751 18:26:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:53.752 18:26:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:53.752 18:26:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:53.752 18:26:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:05:53.752 18:26:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:05:53.752 18:26:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:53.752 18:26:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:53.752 18:26:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:53.752 18:26:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:53.752 18:26:09 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:53.752 18:26:09 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:53.752 18:26:09 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:53.752 18:26:09 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:53.752 18:26:09 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:53.752 18:26:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.752 18:26:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.752 18:26:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.752 18:26:09 -- paths/export.sh@5 -- # export PATH 00:05:53.752 18:26:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.752 18:26:09 -- nvmf/common.sh@51 -- # : 0 00:05:53.752 18:26:09 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:53.752 18:26:09 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:53.752 18:26:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:53.752 18:26:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:53.752 18:26:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:53.752 18:26:09 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:53.752 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:53.752 18:26:09 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:53.752 18:26:09 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:53.752 18:26:09 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:53.752 18:26:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:53.752 18:26:09 -- spdk/autotest.sh@32 -- # uname -s 00:05:53.752 18:26:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:53.752 18:26:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:53.752 18:26:09 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:53.752 18:26:09 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:53.752 18:26:09 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:53.752 18:26:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:53.752 18:26:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:53.752 18:26:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:53.752 18:26:09 -- spdk/autotest.sh@48 -- # udevadm_pid=69217 00:05:53.752 18:26:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:53.752 18:26:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:53.752 18:26:09 -- pm/common@17 -- # local monitor 00:05:53.752 18:26:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:53.752 18:26:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:53.752 18:26:09 -- pm/common@25 -- # sleep 1 00:05:53.752 18:26:09 -- pm/common@21 -- # date +%s 00:05:53.752 18:26:09 -- pm/common@21 -- # date +%s 00:05:53.752 18:26:09 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734200769 00:05:53.752 18:26:09 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734200769 00:05:53.752 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734200769_collect-vmstat.pm.log 00:05:53.752 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734200769_collect-cpu-load.pm.log 00:05:54.688 18:26:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:54.688 18:26:10 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:54.688 18:26:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:54.688 18:26:10 -- common/autotest_common.sh@10 -- # set +x 00:05:54.688 18:26:10 -- spdk/autotest.sh@59 -- # create_test_list 
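The "[: : integer expression expected" complaint above is a genuine (if harmless here) shell bug in test/nvmf/common.sh: line 33 runs an arithmetic test on a variable that is empty in this CI environment, and the [ builtin rejects a blank operand. A minimal reproduction and the usual fix, with an illustrative variable name standing in for the real one:

  flag=                          # empty, as in this run
  [ "$flag" -eq 1 ]              # -> [: : integer expression expected
  [ "${flag:-0}" -eq 1 ]         # default the empty value to 0 before comparing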
00:05:54.688 18:26:10 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:54.688 18:26:10 -- common/autotest_common.sh@10 -- # set +x 00:05:54.688 18:26:10 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:54.688 18:26:10 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:54.688 18:26:10 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:54.688 18:26:10 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:54.688 18:26:10 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:54.688 18:26:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:54.688 18:26:10 -- common/autotest_common.sh@1455 -- # uname 00:05:54.688 18:26:10 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:54.688 18:26:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:54.688 18:26:10 -- common/autotest_common.sh@1475 -- # uname 00:05:54.688 18:26:10 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:54.688 18:26:10 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:54.688 18:26:10 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:54.960 lcov: LCOV version 1.15 00:05:54.960 18:26:10 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:09.897 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:09.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:22.121 18:26:37 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:22.121 18:26:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.121 18:26:37 -- common/autotest_common.sh@10 -- # set +x 00:06:22.121 18:26:37 -- spdk/autotest.sh@78 -- # rm -f 00:06:22.121 18:26:37 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:22.688 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:22.947 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:22.947 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:22.947 18:26:38 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:22.947 18:26:38 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:22.947 18:26:38 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:22.947 18:26:38 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:22.947 18:26:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:22.947 18:26:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:22.947 18:26:38 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:22.947 18:26:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:22.947 18:26:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:22.947 18:26:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:22.947 18:26:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 
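The lcov run above takes an initial (-i) capture tagged Baseline before any test executes, so files whose gcda counters never materialize still show up with zero coverage in the final report; the nvme_stubs.gcno "no functions found" warning is expected noise for a stub file. For reference, the rest of the cycle looks roughly like this (the same --rc branch/function options would be threaded through each call):

  lcov -q -c --no-external -i -t Baseline -d "$src" -o cov_base.info   # pre-test baseline
  # ... run the test suites ...
  lcov -q -c --no-external -t Tests -d "$src" -o cov_test.info         # post-test capture
  lcov -a cov_base.info -a cov_test.info -o cov_total.info             # merge the two
  genhtml cov_total.info -o coverage_html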
00:06:22.947 18:26:38 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:22.947 18:26:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:22.947 18:26:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:22.947 18:26:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:22.947 18:26:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:06:22.947 18:26:38 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:06:22.947 18:26:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:22.947 18:26:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:22.947 18:26:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:22.947 18:26:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:06:22.947 18:26:38 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:06:22.947 18:26:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:22.947 18:26:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:22.947 18:26:38 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:22.947 18:26:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:22.947 18:26:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:22.947 18:26:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:22.947 18:26:38 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:22.947 18:26:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:22.947 No valid GPT data, bailing 00:06:22.947 18:26:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:22.947 18:26:38 -- scripts/common.sh@394 -- # pt= 00:06:22.947 18:26:38 -- scripts/common.sh@395 -- # return 1 00:06:22.947 18:26:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:22.947 1+0 records in 00:06:22.947 1+0 records out 00:06:22.947 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00524634 s, 200 MB/s 00:06:22.947 18:26:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:22.947 18:26:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:22.947 18:26:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:22.947 18:26:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:22.947 18:26:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:22.947 No valid GPT data, bailing 00:06:22.947 18:26:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:22.947 18:26:38 -- scripts/common.sh@394 -- # pt= 00:06:22.947 18:26:38 -- scripts/common.sh@395 -- # return 1 00:06:22.947 18:26:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:22.947 1+0 records in 00:06:22.947 1+0 records out 00:06:22.947 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00476254 s, 220 MB/s 00:06:22.947 18:26:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:22.947 18:26:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:22.947 18:26:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:22.947 18:26:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:22.947 18:26:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:23.207 No valid GPT data, bailing 00:06:23.207 18:26:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:23.207 18:26:38 -- 
scripts/common.sh@394 -- # pt= 00:06:23.207 18:26:38 -- scripts/common.sh@395 -- # return 1 00:06:23.207 18:26:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:23.207 1+0 records in 00:06:23.207 1+0 records out 00:06:23.207 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00433944 s, 242 MB/s 00:06:23.207 18:26:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:23.207 18:26:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:23.207 18:26:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:23.207 18:26:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:23.207 18:26:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:23.207 No valid GPT data, bailing 00:06:23.207 18:26:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:23.207 18:26:38 -- scripts/common.sh@394 -- # pt= 00:06:23.207 18:26:38 -- scripts/common.sh@395 -- # return 1 00:06:23.207 18:26:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:23.207 1+0 records in 00:06:23.207 1+0 records out 00:06:23.207 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494524 s, 212 MB/s 00:06:23.207 18:26:38 -- spdk/autotest.sh@105 -- # sync 00:06:23.466 18:26:39 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:23.466 18:26:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:23.466 18:26:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:26.000 18:26:41 -- spdk/autotest.sh@111 -- # uname -s 00:06:26.000 18:26:41 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:26.000 18:26:41 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:26.000 18:26:41 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:26.259 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:26.259 Hugepages 00:06:26.259 node hugesize free / total 00:06:26.259 node0 1048576kB 0 / 0 00:06:26.259 node0 2048kB 0 / 0 00:06:26.259 00:06:26.259 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:26.518 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:26.518 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:26.518 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:26.518 18:26:42 -- spdk/autotest.sh@117 -- # uname -s 00:06:26.518 18:26:42 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:26.518 18:26:42 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:26.518 18:26:42 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:27.086 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:27.345 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:27.345 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:27.345 18:26:43 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:28.721 18:26:44 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:28.721 18:26:44 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:28.721 18:26:44 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:28.721 18:26:44 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:28.721 18:26:44 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:28.721 18:26:44 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:28.721 18:26:44 -- common/autotest_common.sh@1497 -- # 
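The pre-cleanup pass above walks every nvme namespace, skips any that is zoned, and wipes the first MiB of those whose partition table cannot be read ("No valid GPT data, bailing", then blkid finds no PTTYPE, then dd). The same checks compressed into a standalone sketch (the device name is illustrative, and the dd is destructive):

  dev=nvme0n1
  if [[ $(cat "/sys/block/$dev/queue/zoned") != none ]]; then
      echo "$dev is zoned, leaving it alone"
  elif [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
      dd if=/dev/zero of="/dev/$dev" bs=1M count=1   # no usable partition table: clobber stale metadata
  fi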
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:28.721 18:26:44 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:28.721 18:26:44 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:28.721 18:26:44 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:28.721 18:26:44 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:28.721 18:26:44 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:28.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:28.979 Waiting for block devices as requested 00:06:28.979 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:28.979 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:29.255 18:26:44 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:29.255 18:26:44 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:29.255 18:26:44 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:29.255 18:26:44 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:06:29.255 18:26:44 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:29.255 18:26:44 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:29.255 18:26:44 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:29.255 18:26:44 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:06:29.255 18:26:44 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:06:29.255 18:26:44 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:06:29.255 18:26:44 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:06:29.255 18:26:44 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:29.255 18:26:44 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:29.255 18:26:44 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:29.255 18:26:44 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:29.255 18:26:44 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:29.255 18:26:44 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:06:29.255 18:26:44 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:29.255 18:26:44 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:29.255 18:26:44 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:29.255 18:26:44 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:29.255 18:26:44 -- common/autotest_common.sh@1541 -- # continue 00:06:29.255 18:26:44 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:29.255 18:26:44 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:29.255 18:26:44 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:29.255 18:26:44 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:06:29.255 18:26:44 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:29.255 18:26:44 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:29.255 18:26:44 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:29.255 18:26:44 -- 
common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:29.255 18:26:44 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:29.255 18:26:44 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:29.255 18:26:44 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:29.255 18:26:44 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:29.255 18:26:44 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:29.255 18:26:44 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:29.255 18:26:44 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:29.255 18:26:44 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:29.255 18:26:44 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:29.255 18:26:44 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:29.255 18:26:44 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:29.256 18:26:44 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:29.256 18:26:44 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:29.256 18:26:44 -- common/autotest_common.sh@1541 -- # continue 00:06:29.256 18:26:44 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:29.256 18:26:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:29.256 18:26:44 -- common/autotest_common.sh@10 -- # set +x 00:06:29.256 18:26:44 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:29.256 18:26:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:29.256 18:26:44 -- common/autotest_common.sh@10 -- # set +x 00:06:29.256 18:26:44 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:29.843 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:30.101 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:30.101 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:30.101 18:26:45 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:30.101 18:26:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:30.101 18:26:45 -- common/autotest_common.sh@10 -- # set +x 00:06:30.101 18:26:45 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:30.101 18:26:45 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:30.101 18:26:45 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:30.101 18:26:45 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:30.101 18:26:45 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:30.101 18:26:45 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:30.101 18:26:45 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:30.101 18:26:45 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:30.101 18:26:45 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:30.101 18:26:45 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:30.101 18:26:45 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:30.101 18:26:45 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:30.101 18:26:45 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:30.360 18:26:45 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:30.360 18:26:45 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:30.360 18:26:45 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:30.360 18:26:45 -- common/autotest_common.sh@1564 -- # cat 
/sys/bus/pci/devices/0000:00:10.0/device 00:06:30.360 18:26:45 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:30.360 18:26:45 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:30.360 18:26:45 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:30.360 18:26:45 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:30.360 18:26:45 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:30.360 18:26:45 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:30.360 18:26:45 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:06:30.360 18:26:45 -- common/autotest_common.sh@1570 -- # return 0 00:06:30.360 18:26:45 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:30.360 18:26:45 -- common/autotest_common.sh@1578 -- # return 0 00:06:30.360 18:26:45 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:30.360 18:26:45 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:30.360 18:26:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:30.360 18:26:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:30.360 18:26:45 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:30.360 18:26:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:30.360 18:26:45 -- common/autotest_common.sh@10 -- # set +x 00:06:30.360 18:26:45 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:30.360 18:26:45 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:30.360 18:26:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.360 18:26:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.360 18:26:45 -- common/autotest_common.sh@10 -- # set +x 00:06:30.360 ************************************ 00:06:30.360 START TEST env 00:06:30.360 ************************************ 00:06:30.360 18:26:45 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:30.360 * Looking for test storage... 00:06:30.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:30.360 18:26:45 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:30.360 18:26:45 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:30.360 18:26:45 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:30.360 18:26:46 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:30.360 18:26:46 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.360 18:26:46 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.360 18:26:46 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.360 18:26:46 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.360 18:26:46 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.360 18:26:46 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.360 18:26:46 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.360 18:26:46 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.360 18:26:46 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.360 18:26:46 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.360 18:26:46 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.360 18:26:46 env -- scripts/common.sh@344 -- # case "$op" in 00:06:30.360 18:26:46 env -- scripts/common.sh@345 -- # : 1 00:06:30.360 18:26:46 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.360 18:26:46 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.360 18:26:46 env -- scripts/common.sh@365 -- # decimal 1 00:06:30.360 18:26:46 env -- scripts/common.sh@353 -- # local d=1 00:06:30.360 18:26:46 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.360 18:26:46 env -- scripts/common.sh@355 -- # echo 1 00:06:30.360 18:26:46 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.360 18:26:46 env -- scripts/common.sh@366 -- # decimal 2 00:06:30.360 18:26:46 env -- scripts/common.sh@353 -- # local d=2 00:06:30.360 18:26:46 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.360 18:26:46 env -- scripts/common.sh@355 -- # echo 2 00:06:30.360 18:26:46 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.360 18:26:46 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.361 18:26:46 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.361 18:26:46 env -- scripts/common.sh@368 -- # return 0 00:06:30.361 18:26:46 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.361 18:26:46 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:30.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.361 --rc genhtml_branch_coverage=1 00:06:30.361 --rc genhtml_function_coverage=1 00:06:30.361 --rc genhtml_legend=1 00:06:30.361 --rc geninfo_all_blocks=1 00:06:30.361 --rc geninfo_unexecuted_blocks=1 00:06:30.361 00:06:30.361 ' 00:06:30.361 18:26:46 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:30.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.361 --rc genhtml_branch_coverage=1 00:06:30.361 --rc genhtml_function_coverage=1 00:06:30.361 --rc genhtml_legend=1 00:06:30.361 --rc geninfo_all_blocks=1 00:06:30.361 --rc geninfo_unexecuted_blocks=1 00:06:30.361 00:06:30.361 ' 00:06:30.361 18:26:46 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:30.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.361 --rc genhtml_branch_coverage=1 00:06:30.361 --rc genhtml_function_coverage=1 00:06:30.361 --rc genhtml_legend=1 00:06:30.361 --rc geninfo_all_blocks=1 00:06:30.361 --rc geninfo_unexecuted_blocks=1 00:06:30.361 00:06:30.361 ' 00:06:30.361 18:26:46 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:30.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.361 --rc genhtml_branch_coverage=1 00:06:30.361 --rc genhtml_function_coverage=1 00:06:30.361 --rc genhtml_legend=1 00:06:30.361 --rc geninfo_all_blocks=1 00:06:30.361 --rc geninfo_unexecuted_blocks=1 00:06:30.361 00:06:30.361 ' 00:06:30.361 18:26:46 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:30.361 18:26:46 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.361 18:26:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.361 18:26:46 env -- common/autotest_common.sh@10 -- # set +x 00:06:30.361 ************************************ 00:06:30.361 START TEST env_memory 00:06:30.361 ************************************ 00:06:30.361 18:26:46 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:30.619 00:06:30.619 00:06:30.619 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.619 http://cunit.sourceforge.net/ 00:06:30.619 00:06:30.619 00:06:30.619 Suite: memory 00:06:30.619 Test: alloc and free memory map ...[2024-12-14 18:26:46.144545] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:30.619 passed 00:06:30.620 Test: mem map translation ...[2024-12-14 18:26:46.175734] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:30.620 [2024-12-14 18:26:46.175782] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:30.620 [2024-12-14 18:26:46.175846] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:30.620 [2024-12-14 18:26:46.175857] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:30.620 passed 00:06:30.620 Test: mem map registration ...[2024-12-14 18:26:46.239587] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:30.620 [2024-12-14 18:26:46.239641] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:30.620 passed 00:06:30.620 Test: mem map adjacent registrations ...passed 00:06:30.620 00:06:30.620 Run Summary: Type Total Ran Passed Failed Inactive 00:06:30.620 suites 1 1 n/a 0 0 00:06:30.620 tests 4 4 4 0 0 00:06:30.620 asserts 152 152 152 0 n/a 00:06:30.620 00:06:30.620 Elapsed time = 0.214 seconds 00:06:30.620 00:06:30.620 real 0m0.233s 00:06:30.620 user 0m0.216s 00:06:30.620 sys 0m0.014s 00:06:30.620 18:26:46 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.620 18:26:46 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:30.620 ************************************ 00:06:30.620 END TEST env_memory 00:06:30.620 ************************************ 00:06:30.879 18:26:46 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:30.879 18:26:46 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.879 18:26:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.879 18:26:46 env -- common/autotest_common.sh@10 -- # set +x 00:06:30.879 ************************************ 00:06:30.879 START TEST env_vtophys 00:06:30.879 ************************************ 00:06:30.879 18:26:46 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:30.879 EAL: lib.eal log level changed from notice to debug 00:06:30.879 EAL: Detected lcore 0 as core 0 on socket 0 00:06:30.879 EAL: Detected lcore 1 as core 0 on socket 0 00:06:30.879 EAL: Detected lcore 2 as core 0 on socket 0 00:06:30.879 EAL: Detected lcore 3 as core 0 on socket 0 00:06:30.879 EAL: Detected lcore 4 as core 0 on socket 0 00:06:30.879 EAL: Detected lcore 5 as core 0 on socket 0 00:06:30.879 EAL: Detected lcore 6 as core 0 on socket 0 00:06:30.879 EAL: Detected lcore 7 as core 0 on socket 0 00:06:30.879 EAL: Detected lcore 8 as core 0 on socket 0 00:06:30.879 EAL: Detected lcore 9 as core 0 on socket 0 00:06:30.879 EAL: Maximum logical cores by configuration: 128 00:06:30.879 EAL: Detected CPU lcores: 10 00:06:30.879 EAL: Detected NUMA nodes: 1 00:06:30.879 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:30.879 EAL: Detected shared linkage of DPDK 00:06:30.879 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:30.879 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:30.879 EAL: Registered [vdev] bus. 00:06:30.879 EAL: bus.vdev log level changed from disabled to notice 00:06:30.879 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:30.879 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:30.879 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:30.879 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:30.879 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:30.879 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:30.879 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:30.879 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:30.879 EAL: No shared files mode enabled, IPC will be disabled 00:06:30.879 EAL: No shared files mode enabled, IPC is disabled 00:06:30.879 EAL: Selected IOVA mode 'PA' 00:06:30.879 EAL: Probing VFIO support... 00:06:30.879 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:30.879 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:30.879 EAL: Ask a virtual area of 0x2e000 bytes 00:06:30.879 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:30.879 EAL: Setting up physically contiguous memory... 00:06:30.879 EAL: Setting maximum number of open files to 524288 00:06:30.879 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:30.879 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:30.879 EAL: Ask a virtual area of 0x61000 bytes 00:06:30.879 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:30.879 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:30.879 EAL: Ask a virtual area of 0x400000000 bytes 00:06:30.879 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:30.879 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:30.879 EAL: Ask a virtual area of 0x61000 bytes 00:06:30.879 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:30.879 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:30.879 EAL: Ask a virtual area of 0x400000000 bytes 00:06:30.879 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:30.879 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:30.879 EAL: Ask a virtual area of 0x61000 bytes 00:06:30.879 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:30.879 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:30.879 EAL: Ask a virtual area of 0x400000000 bytes 00:06:30.879 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:30.879 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:30.879 EAL: Ask a virtual area of 0x61000 bytes 00:06:30.879 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:30.879 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:30.879 EAL: Ask a virtual area of 0x400000000 bytes 00:06:30.879 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:30.879 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:06:30.879 EAL: Hugepages will be freed exactly as allocated. 00:06:30.879 EAL: No shared files mode enabled, IPC is disabled 00:06:30.879 EAL: No shared files mode enabled, IPC is disabled 00:06:30.879 EAL: TSC frequency is ~2200000 KHz 00:06:30.879 EAL: Main lcore 0 is ready (tid=7f7f6ec98a00;cpuset=[0]) 00:06:30.879 EAL: Trying to obtain current memory policy. 00:06:30.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.879 EAL: Restoring previous memory policy: 0 00:06:30.879 EAL: request: mp_malloc_sync 00:06:30.879 EAL: No shared files mode enabled, IPC is disabled 00:06:30.879 EAL: Heap on socket 0 was expanded by 2MB 00:06:30.879 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:30.879 EAL: No shared files mode enabled, IPC is disabled 00:06:30.879 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:30.879 EAL: Mem event callback 'spdk:(nil)' registered 00:06:30.879 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:30.879 00:06:30.879 00:06:30.879 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.879 http://cunit.sourceforge.net/ 00:06:30.879 00:06:30.879 00:06:30.879 Suite: components_suite 00:06:30.879 Test: vtophys_malloc_test ...passed 00:06:30.879 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:30.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.879 EAL: Restoring previous memory policy: 4 00:06:30.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.879 EAL: request: mp_malloc_sync 00:06:30.879 EAL: No shared files mode enabled, IPC is disabled 00:06:30.879 EAL: Heap on socket 0 was expanded by 4MB 00:06:30.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.879 EAL: request: mp_malloc_sync 00:06:30.879 EAL: No shared files mode enabled, IPC is disabled 00:06:30.879 EAL: Heap on socket 0 was shrunk by 4MB 00:06:30.879 EAL: Trying to obtain current memory policy. 00:06:30.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.879 EAL: Restoring previous memory policy: 4 00:06:30.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.879 EAL: request: mp_malloc_sync 00:06:30.879 EAL: No shared files mode enabled, IPC is disabled 00:06:30.879 EAL: Heap on socket 0 was expanded by 6MB 00:06:30.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.879 EAL: request: mp_malloc_sync 00:06:30.879 EAL: No shared files mode enabled, IPC is disabled 00:06:30.879 EAL: Heap on socket 0 was shrunk by 6MB 00:06:30.879 EAL: Trying to obtain current memory policy. 00:06:30.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.879 EAL: Restoring previous memory policy: 4 00:06:30.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.879 EAL: request: mp_malloc_sync 00:06:30.879 EAL: No shared files mode enabled, IPC is disabled 00:06:30.879 EAL: Heap on socket 0 was expanded by 10MB 00:06:30.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.879 EAL: request: mp_malloc_sync 00:06:30.879 EAL: No shared files mode enabled, IPC is disabled 00:06:30.879 EAL: Heap on socket 0 was shrunk by 10MB 00:06:30.879 EAL: Trying to obtain current memory policy. 
00:06:30.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.879 EAL: Restoring previous memory policy: 4 00:06:30.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.879 EAL: request: mp_malloc_sync 00:06:30.879 EAL: No shared files mode enabled, IPC is disabled 00:06:30.880 EAL: Heap on socket 0 was expanded by 18MB 00:06:30.880 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.880 EAL: request: mp_malloc_sync 00:06:30.880 EAL: No shared files mode enabled, IPC is disabled 00:06:30.880 EAL: Heap on socket 0 was shrunk by 18MB 00:06:30.880 EAL: Trying to obtain current memory policy. 00:06:30.880 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.880 EAL: Restoring previous memory policy: 4 00:06:30.880 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.880 EAL: request: mp_malloc_sync 00:06:30.880 EAL: No shared files mode enabled, IPC is disabled 00:06:30.880 EAL: Heap on socket 0 was expanded by 34MB 00:06:30.880 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.880 EAL: request: mp_malloc_sync 00:06:30.880 EAL: No shared files mode enabled, IPC is disabled 00:06:30.880 EAL: Heap on socket 0 was shrunk by 34MB 00:06:30.880 EAL: Trying to obtain current memory policy. 00:06:30.880 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.880 EAL: Restoring previous memory policy: 4 00:06:30.880 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.880 EAL: request: mp_malloc_sync 00:06:30.880 EAL: No shared files mode enabled, IPC is disabled 00:06:30.880 EAL: Heap on socket 0 was expanded by 66MB 00:06:30.880 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.880 EAL: request: mp_malloc_sync 00:06:30.880 EAL: No shared files mode enabled, IPC is disabled 00:06:30.880 EAL: Heap on socket 0 was shrunk by 66MB 00:06:30.880 EAL: Trying to obtain current memory policy. 00:06:30.880 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.138 EAL: Restoring previous memory policy: 4 00:06:31.138 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.138 EAL: request: mp_malloc_sync 00:06:31.139 EAL: No shared files mode enabled, IPC is disabled 00:06:31.139 EAL: Heap on socket 0 was expanded by 130MB 00:06:31.139 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.139 EAL: request: mp_malloc_sync 00:06:31.139 EAL: No shared files mode enabled, IPC is disabled 00:06:31.139 EAL: Heap on socket 0 was shrunk by 130MB 00:06:31.139 EAL: Trying to obtain current memory policy. 00:06:31.139 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.139 EAL: Restoring previous memory policy: 4 00:06:31.139 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.139 EAL: request: mp_malloc_sync 00:06:31.139 EAL: No shared files mode enabled, IPC is disabled 00:06:31.139 EAL: Heap on socket 0 was expanded by 258MB 00:06:31.139 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.397 EAL: request: mp_malloc_sync 00:06:31.397 EAL: No shared files mode enabled, IPC is disabled 00:06:31.397 EAL: Heap on socket 0 was shrunk by 258MB 00:06:31.397 EAL: Trying to obtain current memory policy. 
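Each "Calling mem event callback" record is the EAL notifying listeners that hugepage memory just entered or left the heap; the 'spdk:(nil)' callback is how SPDK's env layer keeps its virtual-to-physical translation maps in sync. A consumer can watch the same events through SPDK's public mem-map API; a rough sketch, with the callback body reduced to a print:

    #include "spdk/env.h"
    #include <stdio.h>

    /* Invoked once per contiguous region whenever memory is registered
     * with or unregistered from the SPDK environment. */
    static int
    mem_notify(void *cb_ctx, struct spdk_mem_map *map,
               enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
            printf("%s %p len %zu\n",
                   action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
                   vaddr, size);
            return 0;
    }

    static const struct spdk_mem_map_ops ops = { .notify_cb = mem_notify };

    /* Call after spdk_env_init(): existing regions are replayed first,
     * then later heap growth and shrink events arrive as they happen. */
    static struct spdk_mem_map *
    watch_memory(void)
    {
            return spdk_mem_map_alloc(0, &ops, NULL); /* 0: no default translation */
    }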
00:06:31.397 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.397 EAL: Restoring previous memory policy: 4 00:06:31.397 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.397 EAL: request: mp_malloc_sync 00:06:31.397 EAL: No shared files mode enabled, IPC is disabled 00:06:31.397 EAL: Heap on socket 0 was expanded by 514MB 00:06:31.656 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.656 EAL: request: mp_malloc_sync 00:06:31.656 EAL: No shared files mode enabled, IPC is disabled 00:06:31.656 EAL: Heap on socket 0 was shrunk by 514MB 00:06:31.656 EAL: Trying to obtain current memory policy. 00:06:31.656 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:32.221 EAL: Restoring previous memory policy: 4 00:06:32.221 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.221 EAL: request: mp_malloc_sync 00:06:32.221 EAL: No shared files mode enabled, IPC is disabled 00:06:32.221 EAL: Heap on socket 0 was expanded by 1026MB 00:06:32.481 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.740 passed 00:06:32.740 00:06:32.740 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.740 suites 1 1 n/a 0 0 00:06:32.740 tests 2 2 2 0 0 00:06:32.740 asserts 5274 5274 5274 0 n/a 00:06:32.740 00:06:32.740 Elapsed time = 1.694 seconds 00:06:32.740 EAL: request: mp_malloc_sync 00:06:32.740 EAL: No shared files mode enabled, IPC is disabled 00:06:32.740 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:32.740 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.740 EAL: request: mp_malloc_sync 00:06:32.740 EAL: No shared files mode enabled, IPC is disabled 00:06:32.740 EAL: Heap on socket 0 was shrunk by 2MB 00:06:32.740 EAL: No shared files mode enabled, IPC is disabled 00:06:32.740 EAL: No shared files mode enabled, IPC is disabled 00:06:32.740 EAL: No shared files mode enabled, IPC is disabled 00:06:32.740 00:06:32.740 real 0m1.896s 00:06:32.740 user 0m1.082s 00:06:32.740 sys 0m0.678s 00:06:32.740 18:26:48 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.740 18:26:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:32.740 ************************************ 00:06:32.740 END TEST env_vtophys 00:06:32.740 ************************************ 00:06:32.740 18:26:48 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:32.740 18:26:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.740 18:26:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.740 18:26:48 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.740 ************************************ 00:06:32.740 START TEST env_pci 00:06:32.740 ************************************ 00:06:32.740 18:26:48 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:32.740 00:06:32.740 00:06:32.740 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.740 http://cunit.sourceforge.net/ 00:06:32.740 00:06:32.740 00:06:32.740 Suite: pci 00:06:32.740 Test: pci_hook ...[2024-12-14 18:26:48.351425] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 71434 has claimed it 00:06:32.740 passed 00:06:32.740 00:06:32.740 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.740 suites 1 1 n/a 0 0 00:06:32.740 tests 1 1 1 0 0 00:06:32.740 asserts 25 25 25 0 n/a 00:06:32.740 00:06:32.740 Elapsed time = 0.002 seconds 00:06:32.740 EAL: Cannot find 
device (10000:00:01.0) 00:06:32.740 EAL: Failed to attach device on primary process 00:06:32.740 00:06:32.740 real 0m0.020s 00:06:32.740 user 0m0.007s 00:06:32.740 sys 0m0.013s 00:06:32.740 18:26:48 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.740 18:26:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:32.740 ************************************ 00:06:32.740 END TEST env_pci 00:06:32.740 ************************************ 00:06:32.740 18:26:48 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:32.740 18:26:48 env -- env/env.sh@15 -- # uname 00:06:32.740 18:26:48 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:32.740 18:26:48 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:32.740 18:26:48 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:32.740 18:26:48 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:32.740 18:26:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.740 18:26:48 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.740 ************************************ 00:06:32.740 START TEST env_dpdk_post_init 00:06:32.740 ************************************ 00:06:32.740 18:26:48 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:32.740 EAL: Detected CPU lcores: 10 00:06:32.740 EAL: Detected NUMA nodes: 1 00:06:32.740 EAL: Detected shared linkage of DPDK 00:06:32.740 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:32.740 EAL: Selected IOVA mode 'PA' 00:06:32.999 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:32.999 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:32.999 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:32.999 Starting DPDK initialization... 00:06:32.999 Starting SPDK post initialization... 00:06:32.999 SPDK NVMe probe 00:06:32.999 Attaching to 0000:00:10.0 00:06:32.999 Attaching to 0000:00:11.0 00:06:32.999 Attached to 0000:00:10.0 00:06:32.999 Attached to 0000:00:11.0 00:06:32.999 Cleaning up... 
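The "Attaching to"/"Attached to" pairs are the standard SPDK enumeration flow: spdk_nvme_probe() walks the PCIe bus, calls a probe callback for every NVMe controller it finds (return true to claim it), and then an attach callback once the controller is initialized. A minimal sketch of that flow, with prints standing in for the test's bookkeeping; it assumes the SPDK env has already been initialized:

    #include "spdk/nvme.h"
    #include <stdio.h>

    /* Decide per controller whether to attach; returning true claims it. */
    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
            printf("Attaching to %s\n", trid->traddr);
            return true;
    }

    /* Runs once the controller is up and ready for I/O queue creation. */
    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
    {
            printf("Attached to %s\n", trid->traddr);
    }

    /* NULL transport ID means "scan all local PCIe NVMe devices". */
    static int
    scan(void)
    {
            return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }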
00:06:32.999 00:06:32.999 real 0m0.173s 00:06:32.999 user 0m0.040s 00:06:32.999 sys 0m0.033s 00:06:32.999 18:26:48 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.999 18:26:48 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:32.999 ************************************ 00:06:32.999 END TEST env_dpdk_post_init 00:06:32.999 ************************************ 00:06:32.999 18:26:48 env -- env/env.sh@26 -- # uname 00:06:32.999 18:26:48 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:32.999 18:26:48 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:32.999 18:26:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.999 18:26:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.999 18:26:48 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.999 ************************************ 00:06:32.999 START TEST env_mem_callbacks 00:06:32.999 ************************************ 00:06:32.999 18:26:48 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:32.999 EAL: Detected CPU lcores: 10 00:06:32.999 EAL: Detected NUMA nodes: 1 00:06:32.999 EAL: Detected shared linkage of DPDK 00:06:32.999 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:32.999 EAL: Selected IOVA mode 'PA' 00:06:33.258 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:33.258 00:06:33.258 00:06:33.258 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.258 http://cunit.sourceforge.net/ 00:06:33.258 00:06:33.258 00:06:33.258 Suite: memory 00:06:33.258 Test: test ... 00:06:33.258 register 0x200000200000 2097152 00:06:33.258 malloc 3145728 00:06:33.258 register 0x200000400000 4194304 00:06:33.258 buf 0x200000500000 len 3145728 PASSED 00:06:33.258 malloc 64 00:06:33.258 buf 0x2000004fff40 len 64 PASSED 00:06:33.258 malloc 4194304 00:06:33.258 register 0x200000800000 6291456 00:06:33.258 buf 0x200000a00000 len 4194304 PASSED 00:06:33.258 free 0x200000500000 3145728 00:06:33.258 free 0x2000004fff40 64 00:06:33.258 unregister 0x200000400000 4194304 PASSED 00:06:33.258 free 0x200000a00000 4194304 00:06:33.258 unregister 0x200000800000 6291456 PASSED 00:06:33.258 malloc 8388608 00:06:33.258 register 0x200000400000 10485760 00:06:33.258 buf 0x200000600000 len 8388608 PASSED 00:06:33.258 free 0x200000600000 8388608 00:06:33.258 unregister 0x200000400000 10485760 PASSED 00:06:33.258 passed 00:06:33.258 00:06:33.258 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.258 suites 1 1 n/a 0 0 00:06:33.258 tests 1 1 1 0 0 00:06:33.258 asserts 15 15 15 0 n/a 00:06:33.258 00:06:33.258 Elapsed time = 0.008 seconds 00:06:33.258 00:06:33.258 real 0m0.141s 00:06:33.258 user 0m0.017s 00:06:33.258 sys 0m0.023s 00:06:33.258 18:26:48 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.258 18:26:48 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:33.258 ************************************ 00:06:33.258 END TEST env_mem_callbacks 00:06:33.258 ************************************ 00:06:33.258 00:06:33.258 real 0m2.943s 00:06:33.258 user 0m1.556s 00:06:33.258 sys 0m1.029s 00:06:33.258 ************************************ 00:06:33.258 END TEST env 00:06:33.258 ************************************ 00:06:33.258 18:26:48 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.258 18:26:48 env -- 
common/autotest_common.sh@10 -- # set +x 00:06:33.258 18:26:48 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:33.259 18:26:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.259 18:26:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.259 18:26:48 -- common/autotest_common.sh@10 -- # set +x 00:06:33.259 ************************************ 00:06:33.259 START TEST rpc 00:06:33.259 ************************************ 00:06:33.259 18:26:48 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:33.259 * Looking for test storage... 00:06:33.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:33.259 18:26:48 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:33.259 18:26:48 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:33.259 18:26:48 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:33.517 18:26:49 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:33.517 18:26:49 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.517 18:26:49 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.517 18:26:49 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.517 18:26:49 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.517 18:26:49 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.517 18:26:49 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.517 18:26:49 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.518 18:26:49 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.518 18:26:49 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.518 18:26:49 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.518 18:26:49 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.518 18:26:49 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:33.518 18:26:49 rpc -- scripts/common.sh@345 -- # : 1 00:06:33.518 18:26:49 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.518 18:26:49 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.518 18:26:49 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:33.518 18:26:49 rpc -- scripts/common.sh@353 -- # local d=1 00:06:33.518 18:26:49 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.518 18:26:49 rpc -- scripts/common.sh@355 -- # echo 1 00:06:33.518 18:26:49 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.518 18:26:49 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:33.518 18:26:49 rpc -- scripts/common.sh@353 -- # local d=2 00:06:33.518 18:26:49 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.518 18:26:49 rpc -- scripts/common.sh@355 -- # echo 2 00:06:33.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
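The "Waiting for process to start up and listen on UNIX domain socket" message is waitforlisten polling until spdk_tgt accepts RPC connections on /var/tmp/spdk.sock; every subsequent rpc_cmd is a JSON-RPC call over that socket via scripts/rpc.py. The same round trip can be made from C with SPDK's JSON-RPC client; a sketch, assuming the default socket path and the built-in rpc_get_methods method (poll and ownership semantics simplified):

    #include "spdk/jsonrpc.h"
    #include <sys/socket.h>

    /* Issue one JSON-RPC call over the target's Unix domain socket. */
    static int
    call_rpc_get_methods(void)
    {
            struct spdk_jsonrpc_client *client;
            struct spdk_jsonrpc_client_request *req;
            struct spdk_json_write_ctx *w;
            int rc;

            client = spdk_jsonrpc_client_connect("/var/tmp/spdk.sock", AF_UNIX);
            if (client == NULL) {
                    return -1; /* target not listening yet */
            }

            req = spdk_jsonrpc_client_create_request();
            w = spdk_jsonrpc_begin_request(req, 1, "rpc_get_methods");
            spdk_jsonrpc_end_request(req, w); /* this method needs no params */
            spdk_jsonrpc_client_send_request(client, req);

            do {
                    rc = spdk_jsonrpc_client_poll(client, 1);
            } while (rc == 0); /* 0 means no complete response yet */

            spdk_jsonrpc_client_close(client);
            return rc > 0 ? 0 : rc;
    }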
00:06:33.518 18:26:49 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.518 18:26:49 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.518 18:26:49 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.518 18:26:49 rpc -- scripts/common.sh@368 -- # return 0 00:06:33.518 18:26:49 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.518 18:26:49 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:33.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.518 --rc genhtml_branch_coverage=1 00:06:33.518 --rc genhtml_function_coverage=1 00:06:33.518 --rc genhtml_legend=1 00:06:33.518 --rc geninfo_all_blocks=1 00:06:33.518 --rc geninfo_unexecuted_blocks=1 00:06:33.518 00:06:33.518 ' 00:06:33.518 18:26:49 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:33.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.518 --rc genhtml_branch_coverage=1 00:06:33.518 --rc genhtml_function_coverage=1 00:06:33.518 --rc genhtml_legend=1 00:06:33.518 --rc geninfo_all_blocks=1 00:06:33.518 --rc geninfo_unexecuted_blocks=1 00:06:33.518 00:06:33.518 ' 00:06:33.518 18:26:49 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:33.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.518 --rc genhtml_branch_coverage=1 00:06:33.518 --rc genhtml_function_coverage=1 00:06:33.518 --rc genhtml_legend=1 00:06:33.518 --rc geninfo_all_blocks=1 00:06:33.518 --rc geninfo_unexecuted_blocks=1 00:06:33.518 00:06:33.518 ' 00:06:33.518 18:26:49 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:33.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.518 --rc genhtml_branch_coverage=1 00:06:33.518 --rc genhtml_function_coverage=1 00:06:33.518 --rc genhtml_legend=1 00:06:33.518 --rc geninfo_all_blocks=1 00:06:33.518 --rc geninfo_unexecuted_blocks=1 00:06:33.518 00:06:33.518 ' 00:06:33.518 18:26:49 rpc -- rpc/rpc.sh@65 -- # spdk_pid=71551 00:06:33.518 18:26:49 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.518 18:26:49 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:33.518 18:26:49 rpc -- rpc/rpc.sh@67 -- # waitforlisten 71551 00:06:33.518 18:26:49 rpc -- common/autotest_common.sh@831 -- # '[' -z 71551 ']' 00:06:33.518 18:26:49 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.518 18:26:49 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.518 18:26:49 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.518 18:26:49 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.518 18:26:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.518 [2024-12-14 18:26:49.145772] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:33.518 [2024-12-14 18:26:49.146070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71551 ] 00:06:33.776 [2024-12-14 18:26:49.279308] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.776 [2024-12-14 18:26:49.358579] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
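Everything that follows is the rpc suite driving spdk_tgt method by method (bdev_malloc_create, bdev_passthru_create, bdev_get_bdevs, and so on). On the target side, each method name is bound to a C handler through SPDK's registration macro. As a hedged sketch of that binding, here is how a hypothetical no-argument method could be registered (the name test_ping and its handler are invented for illustration):

    #include "spdk/rpc.h"
    #include "spdk/jsonrpc.h"
    #include "spdk/json.h"

    /* Handler for a made-up "test_ping" method: rejects any parameters
     * and answers with a bare "true" result. */
    static void
    rpc_test_ping(struct spdk_jsonrpc_request *request,
                  const struct spdk_json_val *params)
    {
            struct spdk_json_write_ctx *w;

            if (params != NULL) {
                    spdk_jsonrpc_send_error_response(request,
                            SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                            "test_ping takes no parameters");
                    return;
            }

            w = spdk_jsonrpc_begin_result(request);
            spdk_json_write_bool(w, true);
            spdk_jsonrpc_end_result(request, w);
    }
    /* Callable once the app leaves startup and enters its runtime state. */
    SPDK_RPC_REGISTER("test_ping", rpc_test_ping, SPDK_RPC_RUNTIME)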
00:06:33.777 [2024-12-14 18:26:49.358911] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 71551' to capture a snapshot of events at runtime. 00:06:33.777 [2024-12-14 18:26:49.359080] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:33.777 [2024-12-14 18:26:49.359131] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:33.777 [2024-12-14 18:26:49.359264] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid71551 for offline analysis/debug. 00:06:33.777 [2024-12-14 18:26:49.359351] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.035 18:26:49 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.035 18:26:49 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:34.035 18:26:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:34.035 18:26:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:34.035 18:26:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:34.035 18:26:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:34.035 18:26:49 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.035 18:26:49 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.035 18:26:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.035 ************************************ 00:06:34.035 START TEST rpc_integrity 00:06:34.035 ************************************ 00:06:34.035 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:34.035 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:34.035 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.035 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.035 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.035 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:34.035 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:34.035 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:34.035 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:34.035 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.035 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.294 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.294 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:34.294 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:34.294 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.294 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.294 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.294 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:34.294 { 00:06:34.294 "aliases": [ 00:06:34.294 "62bb2ec5-7014-4893-88fb-fd517bd0bb83" 00:06:34.294 ], 00:06:34.294 "assigned_rate_limits": { 
00:06:34.294 "r_mbytes_per_sec": 0, 00:06:34.294 "rw_ios_per_sec": 0, 00:06:34.294 "rw_mbytes_per_sec": 0, 00:06:34.294 "w_mbytes_per_sec": 0 00:06:34.294 }, 00:06:34.294 "block_size": 512, 00:06:34.294 "claimed": false, 00:06:34.294 "driver_specific": {}, 00:06:34.294 "memory_domains": [ 00:06:34.294 { 00:06:34.294 "dma_device_id": "system", 00:06:34.294 "dma_device_type": 1 00:06:34.294 }, 00:06:34.294 { 00:06:34.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.294 "dma_device_type": 2 00:06:34.294 } 00:06:34.294 ], 00:06:34.294 "name": "Malloc0", 00:06:34.294 "num_blocks": 16384, 00:06:34.294 "product_name": "Malloc disk", 00:06:34.294 "supported_io_types": { 00:06:34.294 "abort": true, 00:06:34.294 "compare": false, 00:06:34.294 "compare_and_write": false, 00:06:34.294 "copy": true, 00:06:34.294 "flush": true, 00:06:34.294 "get_zone_info": false, 00:06:34.294 "nvme_admin": false, 00:06:34.294 "nvme_io": false, 00:06:34.294 "nvme_io_md": false, 00:06:34.294 "nvme_iov_md": false, 00:06:34.294 "read": true, 00:06:34.294 "reset": true, 00:06:34.294 "seek_data": false, 00:06:34.294 "seek_hole": false, 00:06:34.294 "unmap": true, 00:06:34.294 "write": true, 00:06:34.294 "write_zeroes": true, 00:06:34.294 "zcopy": true, 00:06:34.294 "zone_append": false, 00:06:34.294 "zone_management": false 00:06:34.294 }, 00:06:34.294 "uuid": "62bb2ec5-7014-4893-88fb-fd517bd0bb83", 00:06:34.294 "zoned": false 00:06:34.294 } 00:06:34.294 ]' 00:06:34.294 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:34.294 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:34.294 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:34.294 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.294 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.294 [2024-12-14 18:26:49.876526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:34.294 [2024-12-14 18:26:49.876586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:34.294 [2024-12-14 18:26:49.876637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xba8a80 00:06:34.294 [2024-12-14 18:26:49.876649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:34.294 [2024-12-14 18:26:49.878194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:34.294 [2024-12-14 18:26:49.878374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:34.294 Passthru0 00:06:34.294 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.294 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:34.294 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.294 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.294 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.295 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:34.295 { 00:06:34.295 "aliases": [ 00:06:34.295 "62bb2ec5-7014-4893-88fb-fd517bd0bb83" 00:06:34.295 ], 00:06:34.295 "assigned_rate_limits": { 00:06:34.295 "r_mbytes_per_sec": 0, 00:06:34.295 "rw_ios_per_sec": 0, 00:06:34.295 "rw_mbytes_per_sec": 0, 00:06:34.295 "w_mbytes_per_sec": 0 00:06:34.295 }, 00:06:34.295 "block_size": 512, 00:06:34.295 "claim_type": "exclusive_write", 00:06:34.295 
"claimed": true, 00:06:34.295 "driver_specific": {}, 00:06:34.295 "memory_domains": [ 00:06:34.295 { 00:06:34.295 "dma_device_id": "system", 00:06:34.295 "dma_device_type": 1 00:06:34.295 }, 00:06:34.295 { 00:06:34.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.295 "dma_device_type": 2 00:06:34.295 } 00:06:34.295 ], 00:06:34.295 "name": "Malloc0", 00:06:34.295 "num_blocks": 16384, 00:06:34.295 "product_name": "Malloc disk", 00:06:34.295 "supported_io_types": { 00:06:34.295 "abort": true, 00:06:34.295 "compare": false, 00:06:34.295 "compare_and_write": false, 00:06:34.295 "copy": true, 00:06:34.295 "flush": true, 00:06:34.295 "get_zone_info": false, 00:06:34.295 "nvme_admin": false, 00:06:34.295 "nvme_io": false, 00:06:34.295 "nvme_io_md": false, 00:06:34.295 "nvme_iov_md": false, 00:06:34.295 "read": true, 00:06:34.295 "reset": true, 00:06:34.295 "seek_data": false, 00:06:34.295 "seek_hole": false, 00:06:34.295 "unmap": true, 00:06:34.295 "write": true, 00:06:34.295 "write_zeroes": true, 00:06:34.295 "zcopy": true, 00:06:34.295 "zone_append": false, 00:06:34.295 "zone_management": false 00:06:34.295 }, 00:06:34.295 "uuid": "62bb2ec5-7014-4893-88fb-fd517bd0bb83", 00:06:34.295 "zoned": false 00:06:34.295 }, 00:06:34.295 { 00:06:34.295 "aliases": [ 00:06:34.295 "f469f8fd-7df3-50d4-b85b-0c2de8513005" 00:06:34.295 ], 00:06:34.295 "assigned_rate_limits": { 00:06:34.295 "r_mbytes_per_sec": 0, 00:06:34.295 "rw_ios_per_sec": 0, 00:06:34.295 "rw_mbytes_per_sec": 0, 00:06:34.295 "w_mbytes_per_sec": 0 00:06:34.295 }, 00:06:34.295 "block_size": 512, 00:06:34.295 "claimed": false, 00:06:34.295 "driver_specific": { 00:06:34.295 "passthru": { 00:06:34.295 "base_bdev_name": "Malloc0", 00:06:34.295 "name": "Passthru0" 00:06:34.295 } 00:06:34.295 }, 00:06:34.295 "memory_domains": [ 00:06:34.295 { 00:06:34.295 "dma_device_id": "system", 00:06:34.295 "dma_device_type": 1 00:06:34.295 }, 00:06:34.295 { 00:06:34.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.295 "dma_device_type": 2 00:06:34.295 } 00:06:34.295 ], 00:06:34.295 "name": "Passthru0", 00:06:34.295 "num_blocks": 16384, 00:06:34.295 "product_name": "passthru", 00:06:34.295 "supported_io_types": { 00:06:34.295 "abort": true, 00:06:34.295 "compare": false, 00:06:34.295 "compare_and_write": false, 00:06:34.295 "copy": true, 00:06:34.295 "flush": true, 00:06:34.295 "get_zone_info": false, 00:06:34.295 "nvme_admin": false, 00:06:34.295 "nvme_io": false, 00:06:34.295 "nvme_io_md": false, 00:06:34.295 "nvme_iov_md": false, 00:06:34.295 "read": true, 00:06:34.295 "reset": true, 00:06:34.295 "seek_data": false, 00:06:34.295 "seek_hole": false, 00:06:34.295 "unmap": true, 00:06:34.295 "write": true, 00:06:34.295 "write_zeroes": true, 00:06:34.295 "zcopy": true, 00:06:34.295 "zone_append": false, 00:06:34.295 "zone_management": false 00:06:34.295 }, 00:06:34.295 "uuid": "f469f8fd-7df3-50d4-b85b-0c2de8513005", 00:06:34.295 "zoned": false 00:06:34.295 } 00:06:34.295 ]' 00:06:34.295 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:34.295 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:34.295 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:34.295 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.295 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.295 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.295 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd 
bdev_malloc_delete Malloc0 00:06:34.295 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.295 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.295 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.295 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:34.295 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.295 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.295 18:26:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.295 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:34.295 18:26:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:34.554 ************************************ 00:06:34.554 END TEST rpc_integrity 00:06:34.554 ************************************ 00:06:34.554 18:26:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:34.554 00:06:34.554 real 0m0.343s 00:06:34.554 user 0m0.231s 00:06:34.554 sys 0m0.035s 00:06:34.554 18:26:50 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.554 18:26:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.554 18:26:50 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:34.554 18:26:50 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.554 18:26:50 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.554 18:26:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.554 ************************************ 00:06:34.554 START TEST rpc_plugins 00:06:34.554 ************************************ 00:06:34.554 18:26:50 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:34.554 18:26:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:34.554 18:26:50 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.554 18:26:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.554 18:26:50 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.554 18:26:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:34.554 18:26:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:34.554 18:26:50 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.554 18:26:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.554 18:26:50 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.554 18:26:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:34.554 { 00:06:34.554 "aliases": [ 00:06:34.554 "e7ac9141-606a-4f67-8c9f-1b8ed901d15a" 00:06:34.554 ], 00:06:34.554 "assigned_rate_limits": { 00:06:34.554 "r_mbytes_per_sec": 0, 00:06:34.554 "rw_ios_per_sec": 0, 00:06:34.554 "rw_mbytes_per_sec": 0, 00:06:34.554 "w_mbytes_per_sec": 0 00:06:34.554 }, 00:06:34.554 "block_size": 4096, 00:06:34.554 "claimed": false, 00:06:34.554 "driver_specific": {}, 00:06:34.554 "memory_domains": [ 00:06:34.554 { 00:06:34.554 "dma_device_id": "system", 00:06:34.554 "dma_device_type": 1 00:06:34.554 }, 00:06:34.554 { 00:06:34.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.554 "dma_device_type": 2 00:06:34.554 } 00:06:34.554 ], 00:06:34.554 "name": "Malloc1", 00:06:34.554 "num_blocks": 256, 00:06:34.554 "product_name": "Malloc disk", 00:06:34.554 "supported_io_types": { 00:06:34.554 "abort": true, 00:06:34.554 "compare": false, 
00:06:34.554 "compare_and_write": false, 00:06:34.554 "copy": true, 00:06:34.554 "flush": true, 00:06:34.554 "get_zone_info": false, 00:06:34.554 "nvme_admin": false, 00:06:34.554 "nvme_io": false, 00:06:34.554 "nvme_io_md": false, 00:06:34.554 "nvme_iov_md": false, 00:06:34.554 "read": true, 00:06:34.554 "reset": true, 00:06:34.554 "seek_data": false, 00:06:34.554 "seek_hole": false, 00:06:34.554 "unmap": true, 00:06:34.554 "write": true, 00:06:34.554 "write_zeroes": true, 00:06:34.554 "zcopy": true, 00:06:34.554 "zone_append": false, 00:06:34.554 "zone_management": false 00:06:34.554 }, 00:06:34.554 "uuid": "e7ac9141-606a-4f67-8c9f-1b8ed901d15a", 00:06:34.554 "zoned": false 00:06:34.554 } 00:06:34.554 ]' 00:06:34.554 18:26:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:34.554 18:26:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:34.554 18:26:50 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:34.554 18:26:50 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.554 18:26:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.554 18:26:50 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.554 18:26:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:34.554 18:26:50 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.554 18:26:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.554 18:26:50 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.554 18:26:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:34.554 18:26:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:34.554 ************************************ 00:06:34.554 END TEST rpc_plugins 00:06:34.554 ************************************ 00:06:34.554 18:26:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:34.554 00:06:34.554 real 0m0.169s 00:06:34.554 user 0m0.105s 00:06:34.554 sys 0m0.026s 00:06:34.554 18:26:50 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.554 18:26:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.813 18:26:50 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:34.813 18:26:50 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.813 18:26:50 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.813 18:26:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.813 ************************************ 00:06:34.813 START TEST rpc_trace_cmd_test 00:06:34.813 ************************************ 00:06:34.813 18:26:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:34.813 18:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:34.813 18:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:34.813 18:26:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.813 18:26:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.813 18:26:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.813 18:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:34.813 "bdev": { 00:06:34.813 "mask": "0x8", 00:06:34.813 "tpoint_mask": "0xffffffffffffffff" 00:06:34.813 }, 00:06:34.813 "bdev_nvme": { 00:06:34.813 "mask": "0x4000", 00:06:34.813 "tpoint_mask": "0x0" 00:06:34.813 }, 00:06:34.813 "bdev_raid": { 
00:06:34.813 "mask": "0x20000", 00:06:34.813 "tpoint_mask": "0x0" 00:06:34.813 }, 00:06:34.813 "blob": { 00:06:34.813 "mask": "0x10000", 00:06:34.813 "tpoint_mask": "0x0" 00:06:34.813 }, 00:06:34.813 "blobfs": { 00:06:34.813 "mask": "0x80", 00:06:34.813 "tpoint_mask": "0x0" 00:06:34.813 }, 00:06:34.813 "dsa": { 00:06:34.813 "mask": "0x200", 00:06:34.813 "tpoint_mask": "0x0" 00:06:34.813 }, 00:06:34.813 "ftl": { 00:06:34.813 "mask": "0x40", 00:06:34.813 "tpoint_mask": "0x0" 00:06:34.813 }, 00:06:34.813 "iaa": { 00:06:34.813 "mask": "0x1000", 00:06:34.813 "tpoint_mask": "0x0" 00:06:34.813 }, 00:06:34.813 "iscsi_conn": { 00:06:34.813 "mask": "0x2", 00:06:34.813 "tpoint_mask": "0x0" 00:06:34.813 }, 00:06:34.813 "nvme_pcie": { 00:06:34.813 "mask": "0x800", 00:06:34.813 "tpoint_mask": "0x0" 00:06:34.813 }, 00:06:34.813 "nvme_tcp": { 00:06:34.813 "mask": "0x2000", 00:06:34.813 "tpoint_mask": "0x0" 00:06:34.813 }, 00:06:34.813 "nvmf_rdma": { 00:06:34.813 "mask": "0x10", 00:06:34.813 "tpoint_mask": "0x0" 00:06:34.813 }, 00:06:34.813 "nvmf_tcp": { 00:06:34.813 "mask": "0x20", 00:06:34.813 "tpoint_mask": "0x0" 00:06:34.813 }, 00:06:34.813 "scsi": { 00:06:34.813 "mask": "0x4", 00:06:34.813 "tpoint_mask": "0x0" 00:06:34.813 }, 00:06:34.813 "sock": { 00:06:34.813 "mask": "0x8000", 00:06:34.813 "tpoint_mask": "0x0" 00:06:34.813 }, 00:06:34.813 "thread": { 00:06:34.813 "mask": "0x400", 00:06:34.814 "tpoint_mask": "0x0" 00:06:34.814 }, 00:06:34.814 "tpoint_group_mask": "0x8", 00:06:34.814 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid71551" 00:06:34.814 }' 00:06:34.814 18:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:34.814 18:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:06:34.814 18:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:34.814 18:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:34.814 18:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:34.814 18:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:34.814 18:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:34.814 18:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:34.814 18:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:35.072 ************************************ 00:06:35.073 END TEST rpc_trace_cmd_test 00:06:35.073 ************************************ 00:06:35.073 18:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:35.073 00:06:35.073 real 0m0.285s 00:06:35.073 user 0m0.250s 00:06:35.073 sys 0m0.026s 00:06:35.073 18:26:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.073 18:26:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.073 18:26:50 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:06:35.073 18:26:50 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:06:35.073 18:26:50 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.073 18:26:50 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.073 18:26:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.073 ************************************ 00:06:35.073 START TEST go_rpc 00:06:35.073 ************************************ 00:06:35.073 18:26:50 rpc.go_rpc -- common/autotest_common.sh@1125 -- # go_rpc 00:06:35.073 18:26:50 rpc.go_rpc -- rpc/rpc.sh@51 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:35.073 18:26:50 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:06:35.073 18:26:50 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:06:35.073 18:26:50 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:06:35.073 18:26:50 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:06:35.073 18:26:50 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.073 18:26:50 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.073 18:26:50 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.073 18:26:50 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:06:35.073 18:26:50 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:35.073 18:26:50 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["64068780-1807-48eb-b595-6b0841cf57af"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"64068780-1807-48eb-b595-6b0841cf57af","zoned":false}]' 00:06:35.073 18:26:50 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:06:35.331 18:26:50 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:06:35.331 18:26:50 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:35.331 18:26:50 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.331 18:26:50 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.331 18:26:50 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.331 18:26:50 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:35.331 18:26:50 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:06:35.331 18:26:50 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:06:35.331 ************************************ 00:06:35.331 END TEST go_rpc 00:06:35.331 ************************************ 00:06:35.331 18:26:50 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:06:35.331 00:06:35.331 real 0m0.232s 00:06:35.331 user 0m0.154s 00:06:35.331 sys 0m0.039s 00:06:35.331 18:26:50 rpc.go_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.331 18:26:50 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.331 18:26:50 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:35.331 18:26:50 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:35.331 18:26:50 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.331 18:26:50 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.331 18:26:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.331 ************************************ 00:06:35.331 START TEST rpc_daemon_integrity 00:06:35.331 ************************************ 00:06:35.332 18:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:35.332 18:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:06:35.332 18:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.332 18:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.332 18:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.332 18:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:35.332 18:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:35.332 18:26:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:35.332 18:26:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:35.332 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.332 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.332 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.332 18:26:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:06:35.332 18:26:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:35.332 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.332 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.332 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.332 18:26:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:35.332 { 00:06:35.332 "aliases": [ 00:06:35.332 "6a46487e-6e78-4d12-b7d1-79930c13e09c" 00:06:35.332 ], 00:06:35.332 "assigned_rate_limits": { 00:06:35.332 "r_mbytes_per_sec": 0, 00:06:35.332 "rw_ios_per_sec": 0, 00:06:35.332 "rw_mbytes_per_sec": 0, 00:06:35.332 "w_mbytes_per_sec": 0 00:06:35.332 }, 00:06:35.332 "block_size": 512, 00:06:35.332 "claimed": false, 00:06:35.332 "driver_specific": {}, 00:06:35.332 "memory_domains": [ 00:06:35.332 { 00:06:35.332 "dma_device_id": "system", 00:06:35.332 "dma_device_type": 1 00:06:35.332 }, 00:06:35.332 { 00:06:35.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.332 "dma_device_type": 2 00:06:35.332 } 00:06:35.332 ], 00:06:35.332 "name": "Malloc3", 00:06:35.332 "num_blocks": 16384, 00:06:35.332 "product_name": "Malloc disk", 00:06:35.332 "supported_io_types": { 00:06:35.332 "abort": true, 00:06:35.332 "compare": false, 00:06:35.332 "compare_and_write": false, 00:06:35.332 "copy": true, 00:06:35.332 "flush": true, 00:06:35.332 "get_zone_info": false, 00:06:35.332 "nvme_admin": false, 00:06:35.332 "nvme_io": false, 00:06:35.332 "nvme_io_md": false, 00:06:35.332 "nvme_iov_md": false, 00:06:35.332 "read": true, 00:06:35.332 "reset": true, 00:06:35.332 "seek_data": false, 00:06:35.332 "seek_hole": false, 00:06:35.332 "unmap": true, 00:06:35.332 "write": true, 00:06:35.332 "write_zeroes": true, 00:06:35.332 "zcopy": true, 00:06:35.332 "zone_append": false, 00:06:35.332 "zone_management": false 00:06:35.332 }, 00:06:35.332 "uuid": "6a46487e-6e78-4d12-b7d1-79930c13e09c", 00:06:35.332 "zoned": false 00:06:35.332 } 00:06:35.332 ]' 00:06:35.332 18:26:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set 
+x 00:06:35.591 [2024-12-14 18:26:51.108988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:35.591 [2024-12-14 18:26:51.109046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:35.591 [2024-12-14 18:26:51.109062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc39560 00:06:35.591 [2024-12-14 18:26:51.109071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:35.591 [2024-12-14 18:26:51.110273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:35.591 [2024-12-14 18:26:51.110311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:35.591 Passthru0 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:35.591 { 00:06:35.591 "aliases": [ 00:06:35.591 "6a46487e-6e78-4d12-b7d1-79930c13e09c" 00:06:35.591 ], 00:06:35.591 "assigned_rate_limits": { 00:06:35.591 "r_mbytes_per_sec": 0, 00:06:35.591 "rw_ios_per_sec": 0, 00:06:35.591 "rw_mbytes_per_sec": 0, 00:06:35.591 "w_mbytes_per_sec": 0 00:06:35.591 }, 00:06:35.591 "block_size": 512, 00:06:35.591 "claim_type": "exclusive_write", 00:06:35.591 "claimed": true, 00:06:35.591 "driver_specific": {}, 00:06:35.591 "memory_domains": [ 00:06:35.591 { 00:06:35.591 "dma_device_id": "system", 00:06:35.591 "dma_device_type": 1 00:06:35.591 }, 00:06:35.591 { 00:06:35.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.591 "dma_device_type": 2 00:06:35.591 } 00:06:35.591 ], 00:06:35.591 "name": "Malloc3", 00:06:35.591 "num_blocks": 16384, 00:06:35.591 "product_name": "Malloc disk", 00:06:35.591 "supported_io_types": { 00:06:35.591 "abort": true, 00:06:35.591 "compare": false, 00:06:35.591 "compare_and_write": false, 00:06:35.591 "copy": true, 00:06:35.591 "flush": true, 00:06:35.591 "get_zone_info": false, 00:06:35.591 "nvme_admin": false, 00:06:35.591 "nvme_io": false, 00:06:35.591 "nvme_io_md": false, 00:06:35.591 "nvme_iov_md": false, 00:06:35.591 "read": true, 00:06:35.591 "reset": true, 00:06:35.591 "seek_data": false, 00:06:35.591 "seek_hole": false, 00:06:35.591 "unmap": true, 00:06:35.591 "write": true, 00:06:35.591 "write_zeroes": true, 00:06:35.591 "zcopy": true, 00:06:35.591 "zone_append": false, 00:06:35.591 "zone_management": false 00:06:35.591 }, 00:06:35.591 "uuid": "6a46487e-6e78-4d12-b7d1-79930c13e09c", 00:06:35.591 "zoned": false 00:06:35.591 }, 00:06:35.591 { 00:06:35.591 "aliases": [ 00:06:35.591 "6234bf4e-4db5-559b-80b4-10ef7b63c811" 00:06:35.591 ], 00:06:35.591 "assigned_rate_limits": { 00:06:35.591 "r_mbytes_per_sec": 0, 00:06:35.591 "rw_ios_per_sec": 0, 00:06:35.591 "rw_mbytes_per_sec": 0, 00:06:35.591 "w_mbytes_per_sec": 0 00:06:35.591 }, 00:06:35.591 "block_size": 512, 00:06:35.591 "claimed": false, 00:06:35.591 "driver_specific": { 00:06:35.591 "passthru": { 00:06:35.591 "base_bdev_name": "Malloc3", 00:06:35.591 "name": "Passthru0" 00:06:35.591 } 00:06:35.591 }, 00:06:35.591 "memory_domains": [ 00:06:35.591 { 00:06:35.591 "dma_device_id": 
"system", 00:06:35.591 "dma_device_type": 1 00:06:35.591 }, 00:06:35.591 { 00:06:35.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.591 "dma_device_type": 2 00:06:35.591 } 00:06:35.591 ], 00:06:35.591 "name": "Passthru0", 00:06:35.591 "num_blocks": 16384, 00:06:35.591 "product_name": "passthru", 00:06:35.591 "supported_io_types": { 00:06:35.591 "abort": true, 00:06:35.591 "compare": false, 00:06:35.591 "compare_and_write": false, 00:06:35.591 "copy": true, 00:06:35.591 "flush": true, 00:06:35.591 "get_zone_info": false, 00:06:35.591 "nvme_admin": false, 00:06:35.591 "nvme_io": false, 00:06:35.591 "nvme_io_md": false, 00:06:35.591 "nvme_iov_md": false, 00:06:35.591 "read": true, 00:06:35.591 "reset": true, 00:06:35.591 "seek_data": false, 00:06:35.591 "seek_hole": false, 00:06:35.591 "unmap": true, 00:06:35.591 "write": true, 00:06:35.591 "write_zeroes": true, 00:06:35.591 "zcopy": true, 00:06:35.591 "zone_append": false, 00:06:35.591 "zone_management": false 00:06:35.591 }, 00:06:35.591 "uuid": "6234bf4e-4db5-559b-80b4-10ef7b63c811", 00:06:35.591 "zoned": false 00:06:35.591 } 00:06:35.591 ]' 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:35.591 00:06:35.591 real 0m0.328s 00:06:35.591 user 0m0.215s 00:06:35.591 sys 0m0.045s 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.591 ************************************ 00:06:35.591 END TEST rpc_daemon_integrity 00:06:35.591 ************************************ 00:06:35.591 18:26:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.591 18:26:51 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:35.591 18:26:51 rpc -- rpc/rpc.sh@84 -- # killprocess 71551 00:06:35.591 18:26:51 rpc -- common/autotest_common.sh@950 -- # '[' -z 71551 ']' 00:06:35.591 18:26:51 rpc -- common/autotest_common.sh@954 -- # kill -0 71551 00:06:35.591 18:26:51 rpc -- common/autotest_common.sh@955 -- # uname 00:06:35.591 18:26:51 rpc -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:06:35.591 18:26:51 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71551 00:06:35.850 killing process with pid 71551 00:06:35.850 18:26:51 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.850 18:26:51 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.850 18:26:51 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71551' 00:06:35.850 18:26:51 rpc -- common/autotest_common.sh@969 -- # kill 71551 00:06:35.850 18:26:51 rpc -- common/autotest_common.sh@974 -- # wait 71551 00:06:36.418 00:06:36.418 real 0m2.986s 00:06:36.418 user 0m3.731s 00:06:36.418 sys 0m0.871s 00:06:36.418 18:26:51 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.418 ************************************ 00:06:36.418 END TEST rpc 00:06:36.418 ************************************ 00:06:36.418 18:26:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.418 18:26:51 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:36.418 18:26:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.418 18:26:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.418 18:26:51 -- common/autotest_common.sh@10 -- # set +x 00:06:36.418 ************************************ 00:06:36.418 START TEST skip_rpc 00:06:36.418 ************************************ 00:06:36.418 18:26:51 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:36.418 * Looking for test storage... 00:06:36.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:36.418 18:26:52 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:36.418 18:26:52 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:36.418 18:26:52 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:36.418 18:26:52 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.418 18:26:52 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:36.418 18:26:52 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.418 18:26:52 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:36.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.418 --rc genhtml_branch_coverage=1 00:06:36.418 --rc genhtml_function_coverage=1 00:06:36.418 --rc genhtml_legend=1 00:06:36.418 --rc geninfo_all_blocks=1 00:06:36.418 --rc geninfo_unexecuted_blocks=1 00:06:36.418 00:06:36.418 ' 00:06:36.418 18:26:52 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:36.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.418 --rc genhtml_branch_coverage=1 00:06:36.418 --rc genhtml_function_coverage=1 00:06:36.418 --rc genhtml_legend=1 00:06:36.418 --rc geninfo_all_blocks=1 00:06:36.418 --rc geninfo_unexecuted_blocks=1 00:06:36.418 00:06:36.418 ' 00:06:36.418 18:26:52 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:36.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.418 --rc genhtml_branch_coverage=1 00:06:36.418 --rc genhtml_function_coverage=1 00:06:36.418 --rc genhtml_legend=1 00:06:36.418 --rc geninfo_all_blocks=1 00:06:36.418 --rc geninfo_unexecuted_blocks=1 00:06:36.418 00:06:36.418 ' 00:06:36.418 18:26:52 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:36.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.418 --rc genhtml_branch_coverage=1 00:06:36.418 --rc genhtml_function_coverage=1 00:06:36.418 --rc genhtml_legend=1 00:06:36.418 --rc geninfo_all_blocks=1 00:06:36.418 --rc geninfo_unexecuted_blocks=1 00:06:36.418 00:06:36.418 ' 00:06:36.418 18:26:52 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:36.418 18:26:52 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:36.418 18:26:52 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:36.418 18:26:52 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.418 18:26:52 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.418 18:26:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.418 ************************************ 00:06:36.418 START TEST skip_rpc 00:06:36.418 ************************************ 00:06:36.418 18:26:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:36.418 18:26:52 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=71812 00:06:36.418 18:26:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:36.418 18:26:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:36.419 18:26:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:36.677 [2024-12-14 18:26:52.207382] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:36.677 [2024-12-14 18:26:52.207490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71812 ] 00:06:36.677 [2024-12-14 18:26:52.349938] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.936 [2024-12-14 18:26:52.432978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.209 2024/12/14 18:26:57 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 71812 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 71812 ']' 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 71812 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71812 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.209 killing process with pid 71812 00:06:42.209 18:26:57 skip_rpc.skip_rpc 
-- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71812' 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 71812 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 71812 00:06:42.209 00:06:42.209 real 0m5.544s 00:06:42.209 user 0m5.073s 00:06:42.209 sys 0m0.383s 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.209 18:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.209 ************************************ 00:06:42.209 END TEST skip_rpc 00:06:42.209 ************************************ 00:06:42.209 18:26:57 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:42.209 18:26:57 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.209 18:26:57 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.209 18:26:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.209 ************************************ 00:06:42.209 START TEST skip_rpc_with_json 00:06:42.209 ************************************ 00:06:42.209 18:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:42.209 18:26:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:42.209 18:26:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71905 00:06:42.209 18:26:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.209 18:26:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:42.209 18:26:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71905 00:06:42.209 18:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 71905 ']' 00:06:42.209 18:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.209 18:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.209 18:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.209 18:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.209 18:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:42.209 [2024-12-14 18:26:57.784436] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
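The skip_rpc case that just ended above reduces to one assertion: with --no-rpc-server, rpc_cmd spdk_get_version must fail with the "dial unix /var/tmp/spdk.sock" client-creation error. A minimal standalone sketch of that check (binary path and the 5-second settle time are taken from the trace; the error handling is illustrative):

# Start the target with RPC disabled, then confirm RPCs cannot connect.
build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5
if scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
    echo "unexpected: RPC server answered despite --no-rpc-server" >&2
    exit 1
fi
kill "$tgt_pid"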
00:06:42.209 [2024-12-14 18:26:57.784536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71905 ] 00:06:42.209 [2024-12-14 18:26:57.908950] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.472 [2024-12-14 18:26:57.997399] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.067 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.067 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:43.067 18:26:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:43.067 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.067 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:43.067 [2024-12-14 18:26:58.749672] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:43.067 2024/12/14 18:26:58 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:06:43.067 request: 00:06:43.067 { 00:06:43.067 "method": "nvmf_get_transports", 00:06:43.067 "params": { 00:06:43.067 "trtype": "tcp" 00:06:43.067 } 00:06:43.067 } 00:06:43.067 Got JSON-RPC error response 00:06:43.067 GoRPCClient: error on JSON-RPC call 00:06:43.067 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:43.067 18:26:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:43.067 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.067 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:43.067 [2024-12-14 18:26:58.761758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.067 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.067 18:26:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:43.067 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.067 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:43.326 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.326 18:26:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:43.326 { 00:06:43.326 "subsystems": [ 00:06:43.326 { 00:06:43.326 "subsystem": "fsdev", 00:06:43.326 "config": [ 00:06:43.326 { 00:06:43.326 "method": "fsdev_set_opts", 00:06:43.326 "params": { 00:06:43.326 "fsdev_io_cache_size": 256, 00:06:43.326 "fsdev_io_pool_size": 65535 00:06:43.326 } 00:06:43.326 } 00:06:43.326 ] 00:06:43.326 }, 00:06:43.326 { 00:06:43.326 "subsystem": "vfio_user_target", 00:06:43.326 "config": null 00:06:43.326 }, 00:06:43.326 { 00:06:43.326 "subsystem": "keyring", 00:06:43.326 "config": [] 00:06:43.326 }, 00:06:43.326 { 00:06:43.326 "subsystem": "iobuf", 00:06:43.326 "config": [ 00:06:43.326 { 00:06:43.326 "method": "iobuf_set_options", 00:06:43.326 "params": { 00:06:43.326 "large_bufsize": 135168, 00:06:43.326 "large_pool_count": 1024, 00:06:43.326 
"small_bufsize": 8192, 00:06:43.326 "small_pool_count": 8192 00:06:43.326 } 00:06:43.326 } 00:06:43.326 ] 00:06:43.326 }, 00:06:43.326 { 00:06:43.326 "subsystem": "sock", 00:06:43.326 "config": [ 00:06:43.326 { 00:06:43.326 "method": "sock_set_default_impl", 00:06:43.326 "params": { 00:06:43.326 "impl_name": "posix" 00:06:43.326 } 00:06:43.326 }, 00:06:43.326 { 00:06:43.326 "method": "sock_impl_set_options", 00:06:43.326 "params": { 00:06:43.326 "enable_ktls": false, 00:06:43.326 "enable_placement_id": 0, 00:06:43.326 "enable_quickack": false, 00:06:43.326 "enable_recv_pipe": true, 00:06:43.326 "enable_zerocopy_send_client": false, 00:06:43.326 "enable_zerocopy_send_server": true, 00:06:43.326 "impl_name": "ssl", 00:06:43.326 "recv_buf_size": 4096, 00:06:43.326 "send_buf_size": 4096, 00:06:43.326 "tls_version": 0, 00:06:43.326 "zerocopy_threshold": 0 00:06:43.326 } 00:06:43.326 }, 00:06:43.326 { 00:06:43.326 "method": "sock_impl_set_options", 00:06:43.326 "params": { 00:06:43.326 "enable_ktls": false, 00:06:43.326 "enable_placement_id": 0, 00:06:43.326 "enable_quickack": false, 00:06:43.326 "enable_recv_pipe": true, 00:06:43.326 "enable_zerocopy_send_client": false, 00:06:43.326 "enable_zerocopy_send_server": true, 00:06:43.326 "impl_name": "posix", 00:06:43.326 "recv_buf_size": 2097152, 00:06:43.326 "send_buf_size": 2097152, 00:06:43.326 "tls_version": 0, 00:06:43.326 "zerocopy_threshold": 0 00:06:43.326 } 00:06:43.326 } 00:06:43.326 ] 00:06:43.326 }, 00:06:43.326 { 00:06:43.326 "subsystem": "vmd", 00:06:43.326 "config": [] 00:06:43.326 }, 00:06:43.326 { 00:06:43.326 "subsystem": "accel", 00:06:43.326 "config": [ 00:06:43.326 { 00:06:43.326 "method": "accel_set_options", 00:06:43.326 "params": { 00:06:43.326 "buf_count": 2048, 00:06:43.326 "large_cache_size": 16, 00:06:43.326 "sequence_count": 2048, 00:06:43.326 "small_cache_size": 128, 00:06:43.326 "task_count": 2048 00:06:43.326 } 00:06:43.326 } 00:06:43.326 ] 00:06:43.326 }, 00:06:43.326 { 00:06:43.326 "subsystem": "bdev", 00:06:43.326 "config": [ 00:06:43.326 { 00:06:43.326 "method": "bdev_set_options", 00:06:43.326 "params": { 00:06:43.326 "bdev_auto_examine": true, 00:06:43.326 "bdev_io_cache_size": 256, 00:06:43.326 "bdev_io_pool_size": 65535, 00:06:43.326 "iobuf_large_cache_size": 16, 00:06:43.326 "iobuf_small_cache_size": 128 00:06:43.326 } 00:06:43.326 }, 00:06:43.326 { 00:06:43.326 "method": "bdev_raid_set_options", 00:06:43.326 "params": { 00:06:43.326 "process_max_bandwidth_mb_sec": 0, 00:06:43.327 "process_window_size_kb": 1024 00:06:43.327 } 00:06:43.327 }, 00:06:43.327 { 00:06:43.327 "method": "bdev_iscsi_set_options", 00:06:43.327 "params": { 00:06:43.327 "timeout_sec": 30 00:06:43.327 } 00:06:43.327 }, 00:06:43.327 { 00:06:43.327 "method": "bdev_nvme_set_options", 00:06:43.327 "params": { 00:06:43.327 "action_on_timeout": "none", 00:06:43.327 "allow_accel_sequence": false, 00:06:43.327 "arbitration_burst": 0, 00:06:43.327 "bdev_retry_count": 3, 00:06:43.327 "ctrlr_loss_timeout_sec": 0, 00:06:43.327 "delay_cmd_submit": true, 00:06:43.327 "dhchap_dhgroups": [ 00:06:43.327 "null", 00:06:43.327 "ffdhe2048", 00:06:43.327 "ffdhe3072", 00:06:43.327 "ffdhe4096", 00:06:43.327 "ffdhe6144", 00:06:43.327 "ffdhe8192" 00:06:43.327 ], 00:06:43.327 "dhchap_digests": [ 00:06:43.327 "sha256", 00:06:43.327 "sha384", 00:06:43.327 "sha512" 00:06:43.327 ], 00:06:43.327 "disable_auto_failback": false, 00:06:43.327 "fast_io_fail_timeout_sec": 0, 00:06:43.327 "generate_uuids": false, 00:06:43.327 "high_priority_weight": 0, 00:06:43.327 
"io_path_stat": false, 00:06:43.327 "io_queue_requests": 0, 00:06:43.327 "keep_alive_timeout_ms": 10000, 00:06:43.327 "low_priority_weight": 0, 00:06:43.327 "medium_priority_weight": 0, 00:06:43.327 "nvme_adminq_poll_period_us": 10000, 00:06:43.327 "nvme_error_stat": false, 00:06:43.327 "nvme_ioq_poll_period_us": 0, 00:06:43.327 "rdma_cm_event_timeout_ms": 0, 00:06:43.327 "rdma_max_cq_size": 0, 00:06:43.327 "rdma_srq_size": 0, 00:06:43.327 "reconnect_delay_sec": 0, 00:06:43.327 "timeout_admin_us": 0, 00:06:43.327 "timeout_us": 0, 00:06:43.327 "transport_ack_timeout": 0, 00:06:43.327 "transport_retry_count": 4, 00:06:43.327 "transport_tos": 0 00:06:43.327 } 00:06:43.327 }, 00:06:43.327 { 00:06:43.327 "method": "bdev_nvme_set_hotplug", 00:06:43.327 "params": { 00:06:43.327 "enable": false, 00:06:43.327 "period_us": 100000 00:06:43.327 } 00:06:43.327 }, 00:06:43.327 { 00:06:43.327 "method": "bdev_wait_for_examine" 00:06:43.327 } 00:06:43.327 ] 00:06:43.327 }, 00:06:43.327 { 00:06:43.327 "subsystem": "scsi", 00:06:43.327 "config": null 00:06:43.327 }, 00:06:43.327 { 00:06:43.327 "subsystem": "scheduler", 00:06:43.327 "config": [ 00:06:43.327 { 00:06:43.327 "method": "framework_set_scheduler", 00:06:43.327 "params": { 00:06:43.327 "name": "static" 00:06:43.327 } 00:06:43.327 } 00:06:43.327 ] 00:06:43.327 }, 00:06:43.327 { 00:06:43.327 "subsystem": "vhost_scsi", 00:06:43.327 "config": [] 00:06:43.327 }, 00:06:43.327 { 00:06:43.327 "subsystem": "vhost_blk", 00:06:43.327 "config": [] 00:06:43.327 }, 00:06:43.327 { 00:06:43.327 "subsystem": "ublk", 00:06:43.327 "config": [] 00:06:43.327 }, 00:06:43.327 { 00:06:43.327 "subsystem": "nbd", 00:06:43.327 "config": [] 00:06:43.327 }, 00:06:43.327 { 00:06:43.327 "subsystem": "nvmf", 00:06:43.327 "config": [ 00:06:43.327 { 00:06:43.327 "method": "nvmf_set_config", 00:06:43.327 "params": { 00:06:43.327 "admin_cmd_passthru": { 00:06:43.327 "identify_ctrlr": false 00:06:43.327 }, 00:06:43.327 "dhchap_dhgroups": [ 00:06:43.327 "null", 00:06:43.327 "ffdhe2048", 00:06:43.327 "ffdhe3072", 00:06:43.327 "ffdhe4096", 00:06:43.327 "ffdhe6144", 00:06:43.327 "ffdhe8192" 00:06:43.327 ], 00:06:43.327 "dhchap_digests": [ 00:06:43.327 "sha256", 00:06:43.327 "sha384", 00:06:43.327 "sha512" 00:06:43.327 ], 00:06:43.327 "discovery_filter": "match_any" 00:06:43.327 } 00:06:43.327 }, 00:06:43.327 { 00:06:43.327 "method": "nvmf_set_max_subsystems", 00:06:43.327 "params": { 00:06:43.327 "max_subsystems": 1024 00:06:43.327 } 00:06:43.327 }, 00:06:43.327 { 00:06:43.327 "method": "nvmf_set_crdt", 00:06:43.327 "params": { 00:06:43.327 "crdt1": 0, 00:06:43.327 "crdt2": 0, 00:06:43.327 "crdt3": 0 00:06:43.327 } 00:06:43.327 }, 00:06:43.327 { 00:06:43.327 "method": "nvmf_create_transport", 00:06:43.327 "params": { 00:06:43.327 "abort_timeout_sec": 1, 00:06:43.327 "ack_timeout": 0, 00:06:43.327 "buf_cache_size": 4294967295, 00:06:43.327 "c2h_success": true, 00:06:43.327 "data_wr_pool_size": 0, 00:06:43.327 "dif_insert_or_strip": false, 00:06:43.327 "in_capsule_data_size": 4096, 00:06:43.327 "io_unit_size": 131072, 00:06:43.327 "max_aq_depth": 128, 00:06:43.327 "max_io_qpairs_per_ctrlr": 127, 00:06:43.327 "max_io_size": 131072, 00:06:43.327 "max_queue_depth": 128, 00:06:43.327 "num_shared_buffers": 511, 00:06:43.327 "sock_priority": 0, 00:06:43.327 "trtype": "TCP", 00:06:43.327 "zcopy": false 00:06:43.327 } 00:06:43.327 } 00:06:43.327 ] 00:06:43.327 }, 00:06:43.327 { 00:06:43.327 "subsystem": "iscsi", 00:06:43.327 "config": [ 00:06:43.327 { 00:06:43.327 "method": "iscsi_set_options", 
00:06:43.327 "params": { 00:06:43.327 "allow_duplicated_isid": false, 00:06:43.327 "chap_group": 0, 00:06:43.327 "data_out_pool_size": 2048, 00:06:43.327 "default_time2retain": 20, 00:06:43.327 "default_time2wait": 2, 00:06:43.327 "disable_chap": false, 00:06:43.327 "error_recovery_level": 0, 00:06:43.327 "first_burst_length": 8192, 00:06:43.327 "immediate_data": true, 00:06:43.327 "immediate_data_pool_size": 16384, 00:06:43.327 "max_connections_per_session": 2, 00:06:43.327 "max_large_datain_per_connection": 64, 00:06:43.327 "max_queue_depth": 64, 00:06:43.327 "max_r2t_per_connection": 4, 00:06:43.327 "max_sessions": 128, 00:06:43.327 "mutual_chap": false, 00:06:43.327 "node_base": "iqn.2016-06.io.spdk", 00:06:43.327 "nop_in_interval": 30, 00:06:43.327 "nop_timeout": 60, 00:06:43.327 "pdu_pool_size": 36864, 00:06:43.327 "require_chap": false 00:06:43.327 } 00:06:43.327 } 00:06:43.327 ] 00:06:43.327 } 00:06:43.327 ] 00:06:43.327 } 00:06:43.327 18:26:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:43.327 18:26:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71905 00:06:43.327 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 71905 ']' 00:06:43.327 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 71905 00:06:43.327 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:43.327 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.327 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71905 00:06:43.327 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.327 killing process with pid 71905 00:06:43.327 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.327 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71905' 00:06:43.327 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 71905 00:06:43.327 18:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 71905 00:06:43.894 18:26:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71944 00:06:43.894 18:26:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:43.894 18:26:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:49.161 18:27:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71944 00:06:49.161 18:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 71944 ']' 00:06:49.161 18:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 71944 00:06:49.161 18:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:49.161 18:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.161 18:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71944 00:06:49.161 18:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.161 killing process with pid 71944 00:06:49.161 18:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:06:49.161 18:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71944' 00:06:49.161 18:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 71944 00:06:49.161 18:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 71944 00:06:49.420 18:27:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:49.420 18:27:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:49.420 00:06:49.420 real 0m7.307s 00:06:49.420 user 0m6.837s 00:06:49.420 sys 0m0.860s 00:06:49.420 18:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.420 18:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:49.420 ************************************ 00:06:49.420 END TEST skip_rpc_with_json 00:06:49.420 ************************************ 00:06:49.420 18:27:05 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:49.420 18:27:05 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.420 18:27:05 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.420 18:27:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.420 ************************************ 00:06:49.420 START TEST skip_rpc_with_delay 00:06:49.420 ************************************ 00:06:49.420 18:27:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:49.420 18:27:05 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:49.420 18:27:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:49.420 18:27:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:49.420 18:27:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:49.420 18:27:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.420 18:27:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:49.420 18:27:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.420 18:27:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:49.420 18:27:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.420 18:27:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:49.420 18:27:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:49.420 18:27:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:49.679 [2024-12-14 18:27:05.179246] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
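skip_rpc_with_delay expects exactly the *ERROR* line above: --wait-for-rpc is meaningless when the RPC server is disabled, so the launch must fail. A sketch of the assertion (exit handling is illustrative):

# This flag combination is expected to be rejected by spdk_tgt.
if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "expected spdk_tgt to refuse --wait-for-rpc without an RPC server" >&2
    exit 1
fi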
00:06:49.679 [2024-12-14 18:27:05.179389] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:49.679 18:27:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:49.679 18:27:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:49.679 18:27:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:49.679 18:27:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:49.679 00:06:49.679 real 0m0.106s 00:06:49.679 user 0m0.067s 00:06:49.679 sys 0m0.038s 00:06:49.679 18:27:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.679 ************************************ 00:06:49.679 END TEST skip_rpc_with_delay 00:06:49.679 18:27:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:49.679 ************************************ 00:06:49.679 18:27:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:49.679 18:27:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:49.679 18:27:05 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:49.679 18:27:05 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.679 18:27:05 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.679 18:27:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.679 ************************************ 00:06:49.679 START TEST exit_on_failed_rpc_init 00:06:49.679 ************************************ 00:06:49.679 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:49.679 18:27:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=72059 00:06:49.679 18:27:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 72059 00:06:49.679 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 72059 ']' 00:06:49.679 18:27:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.679 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.679 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.679 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.679 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.679 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:49.679 [2024-12-14 18:27:05.331437] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
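waitforlisten above blocks until the freshly started target answers on its Unix socket. A stand-in for that helper, assuming the default socket path from the trace (retry count and interval are illustrative):

# Poll the RPC socket until the target responds or the retries run out.
for _ in $(seq 1 100); do
    scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
    sleep 0.1
done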
00:06:49.679 [2024-12-14 18:27:05.331527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72059 ] 00:06:49.937 [2024-12-14 18:27:05.470512] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.937 [2024-12-14 18:27:05.535149] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.196 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.196 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:50.196 18:27:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.196 18:27:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:50.196 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:50.196 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:50.196 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.196 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.196 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.196 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.196 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.196 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.196 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.196 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:50.196 18:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:50.196 [2024-12-14 18:27:05.939151] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:50.196 [2024-12-14 18:27:05.939268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72076 ] 00:06:50.454 [2024-12-14 18:27:06.081355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.455 [2024-12-14 18:27:06.160532] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.455 [2024-12-14 18:27:06.160679] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
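The *ERROR* above is exactly what exit_on_failed_rpc_init provokes: the second target (-m 0x2) points at the same /var/tmp/spdk.sock as the first and must bail out. Outside this test, the collision is avoided by giving each instance its own socket via -r (a sketch; the second socket path is illustrative):

# Two concurrent targets, each with a distinct RPC socket.
build/bin/spdk_tgt -m 0x1 &
build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &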
00:06:50.455 [2024-12-14 18:27:06.160701] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:50.455 [2024-12-14 18:27:06.160712] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.713 18:27:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:50.713 18:27:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.713 18:27:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:50.713 18:27:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:50.713 18:27:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:50.713 18:27:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.713 18:27:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:50.713 18:27:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 72059 00:06:50.713 18:27:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 72059 ']' 00:06:50.713 18:27:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 72059 00:06:50.713 18:27:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:50.713 18:27:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.713 18:27:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72059 00:06:50.713 18:27:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.713 18:27:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.713 killing process with pid 72059 00:06:50.713 18:27:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72059' 00:06:50.713 18:27:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 72059 00:06:50.713 18:27:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 72059 00:06:51.281 00:06:51.281 real 0m1.561s 00:06:51.281 user 0m1.611s 00:06:51.281 sys 0m0.497s 00:06:51.281 18:27:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.281 18:27:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:51.281 ************************************ 00:06:51.281 END TEST exit_on_failed_rpc_init 00:06:51.281 ************************************ 00:06:51.281 18:27:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:51.281 00:06:51.281 real 0m14.939s 00:06:51.281 user 0m13.794s 00:06:51.281 sys 0m1.976s 00:06:51.281 18:27:06 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.281 18:27:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.281 ************************************ 00:06:51.281 END TEST skip_rpc 00:06:51.281 ************************************ 00:06:51.281 18:27:06 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:51.281 18:27:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.281 18:27:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.281 18:27:06 -- common/autotest_common.sh@10 -- # set +x 00:06:51.281 
************************************ 00:06:51.281 START TEST rpc_client 00:06:51.281 ************************************ 00:06:51.281 18:27:06 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:51.281 * Looking for test storage... 00:06:51.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:51.281 18:27:07 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:51.281 18:27:07 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:51.281 18:27:07 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:51.539 18:27:07 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.539 18:27:07 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:51.539 18:27:07 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.539 18:27:07 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:51.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.539 --rc genhtml_branch_coverage=1 00:06:51.539 --rc genhtml_function_coverage=1 00:06:51.539 --rc genhtml_legend=1 00:06:51.539 --rc geninfo_all_blocks=1 00:06:51.539 --rc geninfo_unexecuted_blocks=1 00:06:51.539 00:06:51.539 ' 00:06:51.539 18:27:07 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:51.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.539 --rc genhtml_branch_coverage=1 00:06:51.539 --rc genhtml_function_coverage=1 00:06:51.539 --rc genhtml_legend=1 00:06:51.539 --rc geninfo_all_blocks=1 00:06:51.539 --rc geninfo_unexecuted_blocks=1 00:06:51.539 00:06:51.539 ' 00:06:51.539 18:27:07 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:51.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.539 --rc genhtml_branch_coverage=1 00:06:51.539 --rc genhtml_function_coverage=1 00:06:51.539 --rc genhtml_legend=1 00:06:51.539 --rc geninfo_all_blocks=1 00:06:51.539 --rc geninfo_unexecuted_blocks=1 00:06:51.539 00:06:51.539 ' 00:06:51.540 18:27:07 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:51.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.540 --rc genhtml_branch_coverage=1 00:06:51.540 --rc genhtml_function_coverage=1 00:06:51.540 --rc genhtml_legend=1 00:06:51.540 --rc geninfo_all_blocks=1 00:06:51.540 --rc geninfo_unexecuted_blocks=1 00:06:51.540 00:06:51.540 ' 00:06:51.540 18:27:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:51.540 OK 00:06:51.540 18:27:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:51.540 00:06:51.540 real 0m0.211s 00:06:51.540 user 0m0.123s 00:06:51.540 sys 0m0.099s 00:06:51.540 18:27:07 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.540 18:27:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:51.540 ************************************ 00:06:51.540 END TEST rpc_client 00:06:51.540 ************************************ 00:06:51.540 18:27:07 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:51.540 18:27:07 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.540 18:27:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.540 18:27:07 -- common/autotest_common.sh@10 -- # set +x 00:06:51.540 ************************************ 00:06:51.540 START TEST json_config 00:06:51.540 ************************************ 00:06:51.540 18:27:07 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:51.540 18:27:07 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:51.540 18:27:07 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:51.540 18:27:07 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:51.799 18:27:07 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:51.799 18:27:07 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.799 18:27:07 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.799 18:27:07 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.799 18:27:07 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.799 18:27:07 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.799 18:27:07 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.799 18:27:07 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.799 18:27:07 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.799 18:27:07 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.799 18:27:07 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.799 18:27:07 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.799 18:27:07 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:51.799 18:27:07 json_config -- scripts/common.sh@345 -- # : 1 00:06:51.799 18:27:07 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.799 18:27:07 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.799 18:27:07 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:51.799 18:27:07 json_config -- scripts/common.sh@353 -- # local d=1 00:06:51.799 18:27:07 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.799 18:27:07 json_config -- scripts/common.sh@355 -- # echo 1 00:06:51.799 18:27:07 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.799 18:27:07 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:51.799 18:27:07 json_config -- scripts/common.sh@353 -- # local d=2 00:06:51.799 18:27:07 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.799 18:27:07 json_config -- scripts/common.sh@355 -- # echo 2 00:06:51.799 18:27:07 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.799 18:27:07 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.799 18:27:07 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.799 18:27:07 json_config -- scripts/common.sh@368 -- # return 0 00:06:51.799 18:27:07 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.799 18:27:07 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:51.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.799 --rc genhtml_branch_coverage=1 00:06:51.799 --rc genhtml_function_coverage=1 00:06:51.799 --rc genhtml_legend=1 00:06:51.799 --rc geninfo_all_blocks=1 00:06:51.799 --rc geninfo_unexecuted_blocks=1 00:06:51.799 00:06:51.799 ' 00:06:51.799 18:27:07 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:51.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.799 --rc genhtml_branch_coverage=1 00:06:51.799 --rc genhtml_function_coverage=1 00:06:51.799 --rc genhtml_legend=1 00:06:51.799 --rc geninfo_all_blocks=1 00:06:51.799 --rc geninfo_unexecuted_blocks=1 00:06:51.799 00:06:51.799 ' 00:06:51.799 18:27:07 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:51.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.799 --rc genhtml_branch_coverage=1 00:06:51.799 --rc genhtml_function_coverage=1 00:06:51.799 --rc genhtml_legend=1 00:06:51.799 --rc geninfo_all_blocks=1 00:06:51.799 --rc geninfo_unexecuted_blocks=1 00:06:51.799 00:06:51.799 ' 00:06:51.799 18:27:07 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:51.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.799 --rc genhtml_branch_coverage=1 00:06:51.799 --rc genhtml_function_coverage=1 00:06:51.799 --rc genhtml_legend=1 00:06:51.799 --rc geninfo_all_blocks=1 00:06:51.799 --rc geninfo_unexecuted_blocks=1 00:06:51.799 00:06:51.799 ' 00:06:51.799 18:27:07 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.799 18:27:07 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:51.799 18:27:07 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:51.799 18:27:07 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.799 18:27:07 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.799 18:27:07 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.799 18:27:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.799 18:27:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.799 18:27:07 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.799 18:27:07 json_config -- paths/export.sh@5 -- # export PATH 00:06:51.799 18:27:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@51 -- # : 0 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:51.799 18:27:07 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:51.799 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:51.799 18:27:07 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:51.799 18:27:07 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:51.799 18:27:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:51.799 18:27:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:51.799 18:27:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:51.799 18:27:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:51.799 18:27:07 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:51.799 18:27:07 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:51.799 18:27:07 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:51.799 18:27:07 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:51.799 18:27:07 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:51.799 18:27:07 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:51.799 18:27:07 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:51.800 18:27:07 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:51.800 18:27:07 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:51.800 18:27:07 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:51.800 INFO: JSON configuration test init 00:06:51.800 18:27:07 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:51.800 18:27:07 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:51.800 18:27:07 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:51.800 18:27:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:51.800 18:27:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:51.800 18:27:07 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:51.800 18:27:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:51.800 18:27:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:51.800 18:27:07 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:51.800 18:27:07 json_config -- json_config/common.sh@9 -- # local app=target 00:06:51.800 18:27:07 json_config -- json_config/common.sh@10 -- # shift 
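The "[: : integer expression expected" message from nvmf/common.sh line 33 above is the usual empty-operand pitfall: '[' '' -eq 1 ']' is not a valid numeric test. The common fix is to default the variable before comparing (the variable name below is hypothetical; the trace does not show which one is empty):

# Defaulting an unset/empty variable keeps the -eq test well-formed.
SOME_FLAG=""
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi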
00:06:51.800 18:27:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:51.800 18:27:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:51.800 18:27:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:51.800 18:27:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:51.800 18:27:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:51.800 18:27:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=72215 00:06:51.800 18:27:07 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:51.800 Waiting for target to run... 00:06:51.800 18:27:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:51.800 18:27:07 json_config -- json_config/common.sh@25 -- # waitforlisten 72215 /var/tmp/spdk_tgt.sock 00:06:51.800 18:27:07 json_config -- common/autotest_common.sh@831 -- # '[' -z 72215 ']' 00:06:51.800 18:27:07 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:51.800 18:27:07 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:51.800 18:27:07 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:51.800 18:27:07 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.800 18:27:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:51.800 [2024-12-14 18:27:07.474689] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
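The json_config target above is started with --wait-for-rpc on a dedicated socket, so it stays paused until configuration arrives over RPC; the load_config call in the trace below effectively performs that unpausing while applying the JSON. A bare-bones equivalent of the handshake (config delivery omitted):

# Start a paused target on its own socket, then let it finish initialization.
build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init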
00:06:51.800 [2024-12-14 18:27:07.474799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72215 ] 00:06:52.367 [2024-12-14 18:27:07.971019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.367 [2024-12-14 18:27:08.057737] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.934 18:27:08 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.934 18:27:08 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:52.934 18:27:08 json_config -- json_config/common.sh@26 -- # echo '' 00:06:52.934 00:06:52.934 18:27:08 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:52.934 18:27:08 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:52.934 18:27:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:52.934 18:27:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:52.934 18:27:08 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:52.934 18:27:08 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:52.934 18:27:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:52.934 18:27:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:52.934 18:27:08 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:52.935 18:27:08 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:52.935 18:27:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:53.502 18:27:09 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:53.502 18:27:09 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:53.502 18:27:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:53.502 18:27:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.502 18:27:09 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:53.502 18:27:09 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:53.502 18:27:09 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:53.502 18:27:09 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:53.502 18:27:09 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:53.502 18:27:09 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:53.502 18:27:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:53.502 18:27:09 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister 
fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@54 -- # sort 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:53.762 18:27:09 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:53.762 18:27:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:53.762 18:27:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:53.762 18:27:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:53.762 18:27:09 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:53.762 18:27:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:54.021 MallocForNvmf0 00:06:54.021 18:27:09 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:54.021 18:27:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:54.280 MallocForNvmf1 00:06:54.280 18:27:09 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:54.280 18:27:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:54.539 [2024-12-14 18:27:10.159126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.539 18:27:10 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:54.539 18:27:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:54.797 18:27:10 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:54.797 18:27:10 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:55.056 18:27:10 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:55.056 18:27:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:55.325 18:27:10 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:55.325 18:27:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:55.599 [2024-12-14 18:27:11.219577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:55.599 18:27:11 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:55.599 18:27:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:55.599 18:27:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:55.599 18:27:11 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:55.599 18:27:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:55.599 18:27:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:55.599 18:27:11 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:55.599 18:27:11 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:55.599 18:27:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:56.166 MallocBdevForConfigChangeCheck 00:06:56.166 18:27:11 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:56.166 18:27:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:56.166 18:27:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:56.166 18:27:11 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:56.166 18:27:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:56.425 INFO: shutting down applications... 00:06:56.425 18:27:12 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
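For reference, the create_nvmf_subsystem_config sequence traced across the preceding entries, collapsed into the bare RPC calls (arguments exactly as logged; the malloc sizes are total MiB followed by block size):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$rpc bdev_malloc_create 8 512  --name MallocForNvmf0    # 8 MiB bdev, 512 B blocks
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MiB bdev, 1024 B blocks
$rpc nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport, flags as traced
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420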
00:06:56.425 18:27:12 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:56.425 18:27:12 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:56.425 18:27:12 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:56.425 18:27:12 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:56.683 Calling clear_iscsi_subsystem 00:06:56.683 Calling clear_nvmf_subsystem 00:06:56.683 Calling clear_nbd_subsystem 00:06:56.683 Calling clear_ublk_subsystem 00:06:56.683 Calling clear_vhost_blk_subsystem 00:06:56.683 Calling clear_vhost_scsi_subsystem 00:06:56.683 Calling clear_bdev_subsystem 00:06:56.683 18:27:12 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:56.683 18:27:12 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:56.683 18:27:12 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:56.683 18:27:12 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:56.683 18:27:12 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:56.683 18:27:12 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:57.250 18:27:12 json_config -- json_config/json_config.sh@352 -- # break 00:06:57.250 18:27:12 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:57.250 18:27:12 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:57.250 18:27:12 json_config -- json_config/common.sh@31 -- # local app=target 00:06:57.250 18:27:12 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:57.250 18:27:12 json_config -- json_config/common.sh@35 -- # [[ -n 72215 ]] 00:06:57.250 18:27:12 json_config -- json_config/common.sh@38 -- # kill -SIGINT 72215 00:06:57.250 18:27:12 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:57.250 18:27:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:57.250 18:27:12 json_config -- json_config/common.sh@41 -- # kill -0 72215 00:06:57.250 18:27:12 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:57.817 18:27:13 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:57.817 18:27:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:57.817 18:27:13 json_config -- json_config/common.sh@41 -- # kill -0 72215 00:06:57.817 18:27:13 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:57.817 18:27:13 json_config -- json_config/common.sh@43 -- # break 00:06:57.817 18:27:13 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:57.817 SPDK target shutdown done 00:06:57.817 18:27:13 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:57.817 INFO: relaunching applications... 00:06:57.817 18:27:13 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
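The json_config_clear pass above pairs clear_config.py with a bounded verification loop (count=100 in the trace). A hedged reconstruction — the loop plumbing is an assumption; the three filter stages are the ones logged:

rpc_addr=/var/tmp/spdk_tgt.sock
config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s "$rpc_addr" clear_config

count=100
while (( count > 0 )); do
    # check_empty exits 0 only once save_config (minus global params) is empty
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" save_config \
         | "$config_filter" -method delete_global_parameters \
         | "$config_filter" -method check_empty; then
        break
    fi
    (( count-- ))
done
[ "$count" -gt 0 ]   # the trace's final "'[' 100 -eq 0 ']'" guard, inverted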
00:06:57.818 18:27:13 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:57.818 18:27:13 json_config -- json_config/common.sh@9 -- # local app=target 00:06:57.818 18:27:13 json_config -- json_config/common.sh@10 -- # shift 00:06:57.818 18:27:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:57.818 18:27:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:57.818 18:27:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:57.818 18:27:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:57.818 18:27:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:57.818 18:27:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=72495 00:06:57.818 18:27:13 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:57.818 Waiting for target to run... 00:06:57.818 18:27:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:57.818 18:27:13 json_config -- json_config/common.sh@25 -- # waitforlisten 72495 /var/tmp/spdk_tgt.sock 00:06:57.818 18:27:13 json_config -- common/autotest_common.sh@831 -- # '[' -z 72495 ']' 00:06:57.818 18:27:13 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:57.818 18:27:13 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:57.818 18:27:13 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:57.818 18:27:13 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.818 18:27:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:57.818 [2024-12-14 18:27:13.552062] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:57.818 [2024-12-14 18:27:13.552161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72495 ] 00:06:58.385 [2024-12-14 18:27:14.073926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.643 [2024-12-14 18:27:14.151247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.901 [2024-12-14 18:27:14.494547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.901 [2024-12-14 18:27:14.526683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:58.901 18:27:14 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.902 18:27:14 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:58.902 00:06:58.902 18:27:14 json_config -- json_config/common.sh@26 -- # echo '' 00:06:58.902 18:27:14 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:58.902 INFO: Checking if target configuration is the same... 00:06:58.902 18:27:14 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 
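The shutdown traced just before this relaunch is worth restating whole: SIGINT the target, then poll kill -0 (signal 0 is a pure existence check) for up to 30 half-second intervals. Everything below is taken directly from the traced json_config/common.sh lines 31-53:

json_config_test_shutdown_app_sketch() {
    local app=$1 pid=${app_pid["$app"]}
    kill -SIGINT "$pid"                      # ask the reactor to exit cleanly
    for (( i = 0; i < 30; i++ )); do         # 30 x 0.5 s = 15 s budget
        if ! kill -0 "$pid" 2>/dev/null; then
            app_pid["$app"]=''               # pid gone: clear the bookkeeping
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    return 1
}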
00:06:58.902 18:27:14 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:58.902 18:27:14 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:58.902 18:27:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:58.902 + '[' 2 -ne 2 ']' 00:06:58.902 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:58.902 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:58.902 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:58.902 +++ basename /dev/fd/62 00:06:58.902 ++ mktemp /tmp/62.XXX 00:06:58.902 + tmp_file_1=/tmp/62.Ye6 00:06:58.902 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:58.902 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:58.902 + tmp_file_2=/tmp/spdk_tgt_config.json.qPY 00:06:58.902 + ret=0 00:06:58.902 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:59.470 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:59.470 + diff -u /tmp/62.Ye6 /tmp/spdk_tgt_config.json.qPY 00:06:59.470 INFO: JSON config files are the same 00:06:59.470 + echo 'INFO: JSON config files are the same' 00:06:59.470 + rm /tmp/62.Ye6 /tmp/spdk_tgt_config.json.qPY 00:06:59.470 + exit 0 00:06:59.470 18:27:15 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:59.470 INFO: changing configuration and checking if this can be detected... 00:06:59.470 18:27:15 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:59.470 18:27:15 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:59.470 18:27:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:59.728 18:27:15 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:59.729 18:27:15 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:59.729 18:27:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:59.729 + '[' 2 -ne 2 ']' 00:06:59.729 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:59.729 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
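Both json_diff.sh runs in this stretch reduce to the same recipe: normalize the two configs with config_filter.py -method sort into mktemp files, then let diff -u decide. A sketch under that reading (the /dev/fd/62 first argument in the trace is live save_config output delivered by process substitution):

json_diff_sketch() {
    local a=$1 b=$2 ret=0
    local filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    local tmp_file_1 tmp_file_2
    tmp_file_1=$(mktemp /tmp/62.XXX)
    tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    "$filter" -method sort < "$a" > "$tmp_file_1"
    "$filter" -method sort < "$b" > "$tmp_file_2"
    if diff -u "$tmp_file_1" "$tmp_file_2"; then
        echo 'INFO: JSON config files are the same'
    else
        ret=1   # mismatch: dump both normalized files into the log
        for f in "$tmp_file_1" "$tmp_file_2"; do
            echo "=== Start of file: $f ==="
            cat "$f"
            echo "=== End of file: $f ==="
            echo ''
        done
    fi
    rm "$tmp_file_1" "$tmp_file_2"
    return "$ret"
}
# Usage, mirroring the trace:
# json_diff_sketch <(rpc.py -s /var/tmp/spdk_tgt.sock save_config) spdk_tgt_config.json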
00:06:59.729 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:59.729 +++ basename /dev/fd/62 00:06:59.729 ++ mktemp /tmp/62.XXX 00:06:59.729 + tmp_file_1=/tmp/62.Q7S 00:06:59.729 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:59.729 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:59.729 + tmp_file_2=/tmp/spdk_tgt_config.json.YeT 00:06:59.729 + ret=0 00:06:59.729 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:00.295 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:00.295 + diff -u /tmp/62.Q7S /tmp/spdk_tgt_config.json.YeT 00:07:00.295 + ret=1 00:07:00.295 + echo '=== Start of file: /tmp/62.Q7S ===' 00:07:00.295 + cat /tmp/62.Q7S 00:07:00.295 + echo '=== End of file: /tmp/62.Q7S ===' 00:07:00.295 + echo '' 00:07:00.295 + echo '=== Start of file: /tmp/spdk_tgt_config.json.YeT ===' 00:07:00.295 + cat /tmp/spdk_tgt_config.json.YeT 00:07:00.295 + echo '=== End of file: /tmp/spdk_tgt_config.json.YeT ===' 00:07:00.295 + echo '' 00:07:00.295 + rm /tmp/62.Q7S /tmp/spdk_tgt_config.json.YeT 00:07:00.295 + exit 1 00:07:00.295 INFO: configuration change detected. 00:07:00.295 18:27:15 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:07:00.295 18:27:15 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:00.295 18:27:15 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:00.295 18:27:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:00.295 18:27:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.295 18:27:15 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:00.295 18:27:15 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:00.295 18:27:15 json_config -- json_config/json_config.sh@324 -- # [[ -n 72495 ]] 00:07:00.295 18:27:15 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:00.295 18:27:15 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:00.295 18:27:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:00.295 18:27:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.295 18:27:15 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:00.295 18:27:15 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:00.295 18:27:15 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:00.295 18:27:15 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:00.295 18:27:15 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:00.295 18:27:15 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:00.295 18:27:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:00.295 18:27:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.295 18:27:15 json_config -- json_config/json_config.sh@330 -- # killprocess 72495 00:07:00.295 18:27:15 json_config -- common/autotest_common.sh@950 -- # '[' -z 72495 ']' 00:07:00.295 18:27:15 json_config -- common/autotest_common.sh@954 -- # kill -0 72495 00:07:00.295 18:27:15 json_config -- common/autotest_common.sh@955 -- # uname 00:07:00.295 18:27:15 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.295 18:27:15 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72495 00:07:00.295 
18:27:15 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.295 18:27:15 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.295 killing process with pid 72495 00:07:00.295 18:27:15 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72495' 00:07:00.295 18:27:15 json_config -- common/autotest_common.sh@969 -- # kill 72495 00:07:00.295 18:27:15 json_config -- common/autotest_common.sh@974 -- # wait 72495 00:07:00.553 18:27:16 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:00.553 18:27:16 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:00.553 18:27:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:00.553 18:27:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.813 18:27:16 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:00.813 INFO: Success 00:07:00.813 18:27:16 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:00.813 00:07:00.813 real 0m9.132s 00:07:00.813 user 0m12.967s 00:07:00.813 sys 0m2.115s 00:07:00.813 18:27:16 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.813 18:27:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.813 ************************************ 00:07:00.813 END TEST json_config 00:07:00.813 ************************************ 00:07:00.813 18:27:16 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:00.813 18:27:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.813 18:27:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.813 18:27:16 -- common/autotest_common.sh@10 -- # set +x 00:07:00.813 ************************************ 00:07:00.813 START TEST json_config_extra_key 00:07:00.813 ************************************ 00:07:00.813 18:27:16 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:00.813 18:27:16 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:00.813 18:27:16 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:07:00.813 18:27:16 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:00.813 18:27:16 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.813 18:27:16 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.813 18:27:16 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:00.813 18:27:16 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.813 18:27:16 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:00.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.813 --rc genhtml_branch_coverage=1 00:07:00.813 --rc genhtml_function_coverage=1 00:07:00.813 --rc genhtml_legend=1 00:07:00.813 --rc geninfo_all_blocks=1 00:07:00.813 --rc geninfo_unexecuted_blocks=1 00:07:00.813 00:07:00.813 ' 00:07:00.813 18:27:16 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:00.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.813 --rc genhtml_branch_coverage=1 00:07:00.813 --rc genhtml_function_coverage=1 00:07:00.813 --rc genhtml_legend=1 00:07:00.813 --rc geninfo_all_blocks=1 00:07:00.813 --rc geninfo_unexecuted_blocks=1 00:07:00.813 00:07:00.813 ' 00:07:00.813 18:27:16 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:00.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.813 --rc genhtml_branch_coverage=1 00:07:00.813 --rc genhtml_function_coverage=1 00:07:00.813 --rc genhtml_legend=1 00:07:00.813 --rc geninfo_all_blocks=1 00:07:00.813 --rc geninfo_unexecuted_blocks=1 00:07:00.813 00:07:00.813 ' 00:07:00.813 18:27:16 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:00.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.813 --rc genhtml_branch_coverage=1 00:07:00.813 --rc genhtml_function_coverage=1 00:07:00.813 --rc genhtml_legend=1 00:07:00.813 --rc geninfo_all_blocks=1 00:07:00.813 --rc geninfo_unexecuted_blocks=1 00:07:00.813 00:07:00.813 ' 00:07:00.813 18:27:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:00.813 18:27:16 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:07:00.813 18:27:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.813 18:27:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.813 18:27:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.813 18:27:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.813 18:27:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.813 18:27:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.813 18:27:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.813 18:27:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.813 18:27:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.813 18:27:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.813 18:27:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:07:00.813 18:27:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:07:00.813 18:27:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.813 18:27:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.813 18:27:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:00.813 18:27:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.814 18:27:16 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:00.814 18:27:16 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.814 18:27:16 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.814 18:27:16 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.814 18:27:16 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.814 18:27:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.814 18:27:16 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.814 18:27:16 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.814 18:27:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:00.814 18:27:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.814 18:27:16 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:00.814 18:27:16 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:00.814 18:27:16 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:00.814 18:27:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.814 18:27:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.814 18:27:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.814 18:27:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:00.814 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:00.814 18:27:16 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:00.814 18:27:16 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:00.814 18:27:16 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:00.814 18:27:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:00.814 18:27:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:00.814 18:27:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:00.814 18:27:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:00.814 18:27:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:00.814 18:27:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:00.814 18:27:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:00.814 18:27:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:00.814 18:27:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:00.814 INFO: launching applications... 00:07:00.814 18:27:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:00.814 18:27:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
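The scripts/common.sh walk traced above (and repeated at each test's lcov probe) is a pure-bash version comparator: split both versions on the characters .-:, coerce each field to a decimal, and compare field by field. Reconstructed from the traced line numbers — treat it as a sketch, not the verbatim source:

decimal() {
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] || d=0   # non-numeric fields compare as 0
    echo "$d"
}

cmp_versions_sketch() {
    local ver1 ver2 op=$2 v lt=0 gt=0 d1 d2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        d1=$(decimal "${ver1[v]:-0}")
        d2=$(decimal "${ver2[v]:-0}")
        if (( d1 > d2 )); then gt=1; break; fi
        if (( d1 < d2 )); then lt=1; break; fi
    done
    case $op in
        '<') (( lt == 1 )) ;;
        '>') (( gt == 1 )) ;;
    esac
}

# The probe in the trace: lcov 1.15 is older than 2, so the legacy
# "--rc lcov_*" option spelling gets exported into LCOV_OPTS.
cmp_versions_sketch 1.15 '<' 2 && echo 'use legacy lcov option names'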
00:07:00.814 18:27:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:00.814 18:27:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:00.814 18:27:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:00.814 18:27:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:00.814 18:27:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:00.814 18:27:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:00.814 18:27:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:00.814 18:27:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:00.814 18:27:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=72679 00:07:00.814 18:27:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:00.814 Waiting for target to run... 00:07:00.814 18:27:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 72679 /var/tmp/spdk_tgt.sock 00:07:00.814 18:27:16 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:00.814 18:27:16 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 72679 ']' 00:07:00.814 18:27:16 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:00.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:00.814 18:27:16 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.814 18:27:16 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:00.814 18:27:16 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.814 18:27:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:01.072 [2024-12-14 18:27:16.654026] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:01.072 [2024-12-14 18:27:16.654176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72679 ] 00:07:01.639 [2024-12-14 18:27:17.201543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.639 [2024-12-14 18:27:17.288935] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.897 18:27:17 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.897 18:27:17 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:01.897 00:07:01.897 18:27:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:01.897 INFO: shutting down applications... 00:07:01.897 18:27:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
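Note the one flag that differs between the target launches in this log: the first boot uses --wait-for-rpc and is configured live over the socket, every later boot replays a JSON file. All three invocations appear verbatim in the traces:

# Cold start: come up idle, wait for rpc.py to feed the configuration.
spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
# Relaunch: replay a previously saved config at boot.
spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
# Extra-key variant: same mechanism, hand-written config file.
spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json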
00:07:01.897 18:27:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:01.897 18:27:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:01.897 18:27:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:01.897 18:27:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 72679 ]] 00:07:01.897 18:27:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 72679 00:07:01.897 18:27:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:01.897 18:27:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:01.897 18:27:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 72679 00:07:01.897 18:27:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:02.464 18:27:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:02.464 18:27:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:02.464 18:27:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 72679 00:07:02.464 18:27:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:03.040 18:27:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:03.040 18:27:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:03.040 18:27:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 72679 00:07:03.040 18:27:18 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:03.040 18:27:18 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:03.040 18:27:18 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:03.040 SPDK target shutdown done 00:07:03.040 Success 00:07:03.040 18:27:18 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:03.040 18:27:18 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:03.040 00:07:03.040 real 0m2.280s 00:07:03.040 user 0m1.682s 00:07:03.040 sys 0m0.609s 00:07:03.040 18:27:18 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.040 18:27:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:03.040 ************************************ 00:07:03.040 END TEST json_config_extra_key 00:07:03.040 ************************************ 00:07:03.040 18:27:18 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:03.040 18:27:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.040 18:27:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.040 18:27:18 -- common/autotest_common.sh@10 -- # set +x 00:07:03.040 ************************************ 00:07:03.040 START TEST alias_rpc 00:07:03.040 ************************************ 00:07:03.040 18:27:18 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:03.040 * Looking for test storage... 
00:07:03.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:03.040 18:27:18 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:03.300 18:27:18 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:03.300 18:27:18 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:03.300 18:27:18 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.300 18:27:18 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:03.300 18:27:18 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.300 18:27:18 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:03.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.300 --rc genhtml_branch_coverage=1 00:07:03.300 --rc genhtml_function_coverage=1 00:07:03.300 --rc genhtml_legend=1 00:07:03.300 --rc geninfo_all_blocks=1 00:07:03.300 --rc geninfo_unexecuted_blocks=1 00:07:03.300 00:07:03.300 ' 00:07:03.300 18:27:18 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:03.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.300 --rc genhtml_branch_coverage=1 00:07:03.300 --rc genhtml_function_coverage=1 00:07:03.300 --rc genhtml_legend=1 00:07:03.300 --rc geninfo_all_blocks=1 00:07:03.300 --rc geninfo_unexecuted_blocks=1 00:07:03.300 00:07:03.300 ' 00:07:03.300 18:27:18 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:03.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.300 --rc genhtml_branch_coverage=1 00:07:03.300 --rc genhtml_function_coverage=1 00:07:03.300 --rc genhtml_legend=1 00:07:03.300 --rc geninfo_all_blocks=1 00:07:03.300 --rc geninfo_unexecuted_blocks=1 00:07:03.300 00:07:03.300 ' 00:07:03.300 18:27:18 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:03.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.300 --rc genhtml_branch_coverage=1 00:07:03.300 --rc genhtml_function_coverage=1 00:07:03.300 --rc genhtml_legend=1 00:07:03.300 --rc geninfo_all_blocks=1 00:07:03.300 --rc geninfo_unexecuted_blocks=1 00:07:03.300 00:07:03.300 ' 00:07:03.300 18:27:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:03.300 18:27:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=72775 00:07:03.300 18:27:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 72775 00:07:03.300 18:27:18 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 72775 ']' 00:07:03.300 18:27:18 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.300 18:27:18 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.300 18:27:18 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.300 18:27:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:03.300 18:27:18 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.300 18:27:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.300 [2024-12-14 18:27:18.963969] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:03.300 [2024-12-14 18:27:18.964064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72775 ] 00:07:03.559 [2024-12-14 18:27:19.103204] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.559 [2024-12-14 18:27:19.194245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.494 18:27:19 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.494 18:27:19 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:04.494 18:27:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:04.494 18:27:20 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 72775 00:07:04.494 18:27:20 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 72775 ']' 00:07:04.494 18:27:20 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 72775 00:07:04.494 18:27:20 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:04.494 18:27:20 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.494 18:27:20 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72775 00:07:04.494 killing process with pid 72775 00:07:04.494 18:27:20 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.494 18:27:20 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.494 18:27:20 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72775' 00:07:04.494 18:27:20 alias_rpc -- common/autotest_common.sh@969 -- # kill 72775 00:07:04.494 18:27:20 alias_rpc -- common/autotest_common.sh@974 -- # wait 72775 00:07:05.062 00:07:05.062 real 0m2.009s 00:07:05.062 user 0m2.127s 00:07:05.062 sys 0m0.565s 00:07:05.062 18:27:20 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.062 18:27:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.062 ************************************ 00:07:05.062 END TEST alias_rpc 00:07:05.062 ************************************ 00:07:05.062 18:27:20 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:07:05.062 18:27:20 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:05.062 18:27:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.062 18:27:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.062 18:27:20 -- common/autotest_common.sh@10 -- # set +x 00:07:05.062 ************************************ 00:07:05.062 START TEST dpdk_mem_utility 00:07:05.062 ************************************ 00:07:05.062 18:27:20 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:05.321 * Looking for test storage... 
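The alias_rpc test traced here is short enough to restate whole: trap-kill on error, bare spdk_tgt, a single load_config -i call, then killprocess. A sketch — the killprocess body is reconstructed from the traced checks, and its sudo branch is an assumption:

trap 'killprocess "$spdk_tgt_pid"; exit 1' ERR
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"
# The single RPC under test, flags exactly as traced.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
killprocess "$spdk_tgt_pid"

killprocess_sketch() {
    local pid=$1 process_name
    kill -0 "$pid" || return 1                       # nothing to kill
    # Resolve the comm name so a sudo wrapper is not signalled as if it
    # were the reactor itself (trace shows process_name=reactor_0 here).
    process_name=$(ps --no-headers -o comm= "$pid")
    if [[ $process_name != sudo ]]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    fi
}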
00:07:05.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:05.321 18:27:20 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:05.321 18:27:20 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:07:05.321 18:27:20 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:05.321 18:27:20 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.321 18:27:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:05.321 18:27:20 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.321 18:27:20 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:05.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.321 --rc genhtml_branch_coverage=1 00:07:05.321 --rc genhtml_function_coverage=1 00:07:05.321 --rc genhtml_legend=1 00:07:05.321 --rc geninfo_all_blocks=1 00:07:05.321 --rc geninfo_unexecuted_blocks=1 00:07:05.321 00:07:05.321 ' 00:07:05.321 18:27:20 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:05.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.321 --rc 
genhtml_branch_coverage=1 00:07:05.321 --rc genhtml_function_coverage=1 00:07:05.321 --rc genhtml_legend=1 00:07:05.321 --rc geninfo_all_blocks=1 00:07:05.321 --rc geninfo_unexecuted_blocks=1 00:07:05.321 00:07:05.321 ' 00:07:05.321 18:27:20 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:05.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.321 --rc genhtml_branch_coverage=1 00:07:05.321 --rc genhtml_function_coverage=1 00:07:05.321 --rc genhtml_legend=1 00:07:05.321 --rc geninfo_all_blocks=1 00:07:05.321 --rc geninfo_unexecuted_blocks=1 00:07:05.321 00:07:05.321 ' 00:07:05.321 18:27:20 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:05.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.321 --rc genhtml_branch_coverage=1 00:07:05.321 --rc genhtml_function_coverage=1 00:07:05.321 --rc genhtml_legend=1 00:07:05.321 --rc geninfo_all_blocks=1 00:07:05.321 --rc geninfo_unexecuted_blocks=1 00:07:05.321 00:07:05.321 ' 00:07:05.321 18:27:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:05.321 18:27:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=72875 00:07:05.321 18:27:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 72875 00:07:05.321 18:27:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:05.321 18:27:20 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 72875 ']' 00:07:05.321 18:27:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.321 18:27:20 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.321 18:27:20 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.321 18:27:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.321 18:27:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:05.321 [2024-12-14 18:27:20.975463] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:05.321 [2024-12-14 18:27:20.975538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72875 ] 00:07:05.579 [2024-12-14 18:27:21.107255] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.579 [2024-12-14 18:27:21.179390] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.837 18:27:21 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.837 18:27:21 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:05.837 18:27:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:05.837 18:27:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:05.837 18:27:21 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.837 18:27:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:05.837 { 00:07:05.837 "filename": "/tmp/spdk_mem_dump.txt" 00:07:05.837 } 00:07:05.837 18:27:21 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.837 18:27:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:05.837 DPDK memory size 860.000000 MiB in 1 heap(s) 00:07:05.837 1 heaps totaling size 860.000000 MiB 00:07:05.837 size: 860.000000 MiB heap id: 0 00:07:05.837 end heaps---------- 00:07:05.837 9 mempools totaling size 642.649841 MiB 00:07:05.837 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:05.837 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:05.837 size: 92.545471 MiB name: bdev_io_72875 00:07:05.837 size: 51.011292 MiB name: evtpool_72875 00:07:05.837 size: 50.003479 MiB name: msgpool_72875 00:07:05.837 size: 36.509338 MiB name: fsdev_io_72875 00:07:05.837 size: 21.763794 MiB name: PDU_Pool 00:07:05.837 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:05.837 size: 0.026123 MiB name: Session_Pool 00:07:05.837 end mempools------- 00:07:05.837 6 memzones totaling size 4.142822 MiB 00:07:05.837 size: 1.000366 MiB name: RG_ring_0_72875 00:07:05.837 size: 1.000366 MiB name: RG_ring_1_72875 00:07:05.837 size: 1.000366 MiB name: RG_ring_4_72875 00:07:05.837 size: 1.000366 MiB name: RG_ring_5_72875 00:07:05.837 size: 0.125366 MiB name: RG_ring_2_72875 00:07:05.837 size: 0.015991 MiB name: RG_ring_3_72875 00:07:05.837 end memzones------- 00:07:05.837 18:27:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:06.119 heap id: 0 total size: 860.000000 MiB number of busy elements: 271 number of free elements: 16 00:07:06.119 list of free elements. 
size: 13.943115 MiB 00:07:06.119 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:06.119 element at address: 0x200000800000 with size: 1.996948 MiB 00:07:06.119 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:07:06.119 element at address: 0x20001be00000 with size: 0.999878 MiB 00:07:06.119 element at address: 0x200034a00000 with size: 0.994446 MiB 00:07:06.119 element at address: 0x200009600000 with size: 0.959839 MiB 00:07:06.119 element at address: 0x200015e00000 with size: 0.954285 MiB 00:07:06.119 element at address: 0x20001c000000 with size: 0.936584 MiB 00:07:06.119 element at address: 0x200000200000 with size: 0.834839 MiB 00:07:06.119 element at address: 0x20001d800000 with size: 0.572449 MiB 00:07:06.119 element at address: 0x20000d800000 with size: 0.489441 MiB 00:07:06.119 element at address: 0x200003e00000 with size: 0.488831 MiB 00:07:06.119 element at address: 0x20001c200000 with size: 0.485657 MiB 00:07:06.119 element at address: 0x200007000000 with size: 0.480286 MiB 00:07:06.119 element at address: 0x20002ac00000 with size: 0.398682 MiB 00:07:06.119 element at address: 0x200003a00000 with size: 0.351562 MiB 00:07:06.119 list of standard malloc elements. size: 199.260193 MiB 00:07:06.119 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:07:06.119 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:07:06.119 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:07:06.119 element at address: 0x20001befff80 with size: 1.000122 MiB 00:07:06.119 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:07:06.119 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:06.119 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:07:06.119 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:06.119 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:07:06.119 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d6c00 with size: 0.000183 MiB 
00:07:06.119 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:06.119 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a5a000 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a5e4c0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7e780 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7e840 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7e900 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7e9c0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7ea80 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7eb40 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7ec00 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003aff880 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:07:06.119 element at 
address: 0x200003e7d3c0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:07:06.119 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:07:06.120 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:07:06.120 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:07:06.120 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:07:06.120 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:07:06.120 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:07:06.120 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000707af40 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000707b000 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000707b180 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000707b240 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000707b300 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000707b480 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000707b540 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000707b600 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:07:06.120 element at address: 0x2000096fdd80 
with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:07:06.120 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d892980 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d893040 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d893100 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d893280 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d893340 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d893400 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d893580 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d893640 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d893700 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d893880 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d893940 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d894000 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d894180 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d894240 with size: 0.000183 MiB 
00:07:06.120 element at address: 0x20001d894300 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d894480 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d894540 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d894600 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d894780 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d894840 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d894900 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d895080 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d895140 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d895200 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d895380 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20001d895440 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac66100 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac661c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6cdc0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:07:06.120 element at 
address: 0x20002ac6e040 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:07:06.120 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:07:06.121 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:07:06.121 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:07:06.121 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:07:06.121 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:07:06.121 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:07:06.121 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:07:06.121 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:07:06.121 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:07:06.121 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:07:06.121 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:07:06.121 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:07:06.121 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:07:06.121 list of memzone associated elements. 
size: 646.796692 MiB 00:07:06.121 element at address: 0x20001d895500 with size: 211.416748 MiB 00:07:06.121 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:06.121 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:07:06.121 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:06.121 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:07:06.121 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_72875_0 00:07:06.121 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:06.121 associated memzone info: size: 48.002930 MiB name: MP_evtpool_72875_0 00:07:06.121 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:06.121 associated memzone info: size: 48.002930 MiB name: MP_msgpool_72875_0 00:07:06.121 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:07:06.121 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_72875_0 00:07:06.121 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:07:06.121 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:06.121 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:07:06.121 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:06.121 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:06.121 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_72875 00:07:06.121 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:06.121 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_72875 00:07:06.121 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:06.121 associated memzone info: size: 1.007996 MiB name: MP_evtpool_72875 00:07:06.121 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:07:06.121 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:06.121 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:07:06.121 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:06.121 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:07:06.121 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:06.121 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:07:06.121 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:06.121 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:06.121 associated memzone info: size: 1.000366 MiB name: RG_ring_0_72875 00:07:06.121 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:06.121 associated memzone info: size: 1.000366 MiB name: RG_ring_1_72875 00:07:06.121 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:07:06.121 associated memzone info: size: 1.000366 MiB name: RG_ring_4_72875 00:07:06.121 element at address: 0x200034afe940 with size: 1.000488 MiB 00:07:06.121 associated memzone info: size: 1.000366 MiB name: RG_ring_5_72875 00:07:06.121 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:07:06.121 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_72875 00:07:06.121 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:07:06.121 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_72875 00:07:06.121 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:07:06.121 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:06.121 element at address: 0x20000707b780 with size: 0.500488 MiB 00:07:06.121 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:07:06.121 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:07:06.121 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:06.121 element at address: 0x200003a5e580 with size: 0.125488 MiB 00:07:06.121 associated memzone info: size: 0.125366 MiB name: RG_ring_2_72875 00:07:06.121 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:07:06.121 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:06.121 element at address: 0x20002ac66280 with size: 0.023743 MiB 00:07:06.121 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:06.121 element at address: 0x200003a5a2c0 with size: 0.016113 MiB 00:07:06.121 associated memzone info: size: 0.015991 MiB name: RG_ring_3_72875 00:07:06.121 element at address: 0x20002ac6c3c0 with size: 0.002441 MiB 00:07:06.121 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:06.121 element at address: 0x2000002d6fc0 with size: 0.000305 MiB 00:07:06.121 associated memzone info: size: 0.000183 MiB name: MP_msgpool_72875 00:07:06.121 element at address: 0x200003aff940 with size: 0.000305 MiB 00:07:06.121 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_72875 00:07:06.121 element at address: 0x200003a5a0c0 with size: 0.000305 MiB 00:07:06.121 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_72875 00:07:06.121 element at address: 0x20002ac6ce80 with size: 0.000305 MiB 00:07:06.121 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:06.121 18:27:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:06.121 18:27:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 72875 00:07:06.121 18:27:21 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 72875 ']' 00:07:06.121 18:27:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 72875 00:07:06.121 18:27:21 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:06.121 18:27:21 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.121 18:27:21 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72875 00:07:06.121 18:27:21 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.121 killing process with pid 72875 00:07:06.121 18:27:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.121 18:27:21 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72875' 00:07:06.121 18:27:21 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 72875 00:07:06.121 18:27:21 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 72875 00:07:06.712 00:07:06.712 real 0m1.401s 00:07:06.712 user 0m1.248s 00:07:06.712 sys 0m0.500s 00:07:06.712 18:27:22 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.712 18:27:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:06.712 ************************************ 00:07:06.712 END TEST dpdk_mem_utility 00:07:06.712 ************************************ 00:07:06.712 18:27:22 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:06.712 18:27:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.712 18:27:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.712 18:27:22 -- common/autotest_common.sh@10 -- # set +x 
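For orientation before the next suite starts: the dpdk_mem_utility run above boils down to a short RPC round-trip. A minimal sketch, assuming only the paths and RPC name shown in the trace (the socket-polling loop is an illustrative stand-in for common.sh's waitforlisten helper, not a copy of it):

  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/build/bin/spdk_tgt &                           # the target; spdkpid=72875 in the run above
  spdkpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # wait for the RPC UNIX socket
  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats          # writes /tmp/spdk_mem_dump.txt
  $SPDK/scripts/dpdk_mem_info.py                       # heap/mempool/memzone summary
  $SPDK/scripts/dpdk_mem_info.py -m 0                  # per-element view of heap 0
  kill $spdkpid                                        # killprocess in the trace

The summary invocation produced the "1 heaps / 9 mempools / 6 memzones" totals above; the -m 0 invocation produced the per-element free/malloc/memzone listing.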
00:07:06.712 ************************************ 00:07:06.712 START TEST event 00:07:06.712 ************************************ 00:07:06.712 18:27:22 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:06.712 * Looking for test storage... 00:07:06.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:06.712 18:27:22 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:06.712 18:27:22 event -- common/autotest_common.sh@1681 -- # lcov --version 00:07:06.712 18:27:22 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:06.712 18:27:22 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:06.712 18:27:22 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.712 18:27:22 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.712 18:27:22 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.712 18:27:22 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.712 18:27:22 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.712 18:27:22 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.712 18:27:22 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.712 18:27:22 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.712 18:27:22 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.712 18:27:22 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.712 18:27:22 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.712 18:27:22 event -- scripts/common.sh@344 -- # case "$op" in 00:07:06.712 18:27:22 event -- scripts/common.sh@345 -- # : 1 00:07:06.712 18:27:22 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.712 18:27:22 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:06.712 18:27:22 event -- scripts/common.sh@365 -- # decimal 1 00:07:06.712 18:27:22 event -- scripts/common.sh@353 -- # local d=1 00:07:06.712 18:27:22 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.712 18:27:22 event -- scripts/common.sh@355 -- # echo 1 00:07:06.712 18:27:22 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.712 18:27:22 event -- scripts/common.sh@366 -- # decimal 2 00:07:06.712 18:27:22 event -- scripts/common.sh@353 -- # local d=2 00:07:06.712 18:27:22 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.712 18:27:22 event -- scripts/common.sh@355 -- # echo 2 00:07:06.712 18:27:22 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.712 18:27:22 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.712 18:27:22 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.712 18:27:22 event -- scripts/common.sh@368 -- # return 0 00:07:06.712 18:27:22 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.712 18:27:22 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:06.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.712 --rc genhtml_branch_coverage=1 00:07:06.712 --rc genhtml_function_coverage=1 00:07:06.712 --rc genhtml_legend=1 00:07:06.712 --rc geninfo_all_blocks=1 00:07:06.712 --rc geninfo_unexecuted_blocks=1 00:07:06.712 00:07:06.712 ' 00:07:06.712 18:27:22 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:06.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.712 --rc genhtml_branch_coverage=1 00:07:06.712 --rc genhtml_function_coverage=1 00:07:06.712 --rc genhtml_legend=1 00:07:06.712 --rc 
geninfo_all_blocks=1 00:07:06.712 --rc geninfo_unexecuted_blocks=1 00:07:06.712 00:07:06.712 ' 00:07:06.712 18:27:22 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:06.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.712 --rc genhtml_branch_coverage=1 00:07:06.712 --rc genhtml_function_coverage=1 00:07:06.712 --rc genhtml_legend=1 00:07:06.712 --rc geninfo_all_blocks=1 00:07:06.712 --rc geninfo_unexecuted_blocks=1 00:07:06.712 00:07:06.712 ' 00:07:06.712 18:27:22 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:06.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.712 --rc genhtml_branch_coverage=1 00:07:06.712 --rc genhtml_function_coverage=1 00:07:06.712 --rc genhtml_legend=1 00:07:06.712 --rc geninfo_all_blocks=1 00:07:06.712 --rc geninfo_unexecuted_blocks=1 00:07:06.712 00:07:06.712 ' 00:07:06.712 18:27:22 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:06.712 18:27:22 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:06.712 18:27:22 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:06.712 18:27:22 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:06.712 18:27:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.712 18:27:22 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.971 ************************************ 00:07:06.971 START TEST event_perf 00:07:06.971 ************************************ 00:07:06.971 18:27:22 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:06.971 Running I/O for 1 seconds...[2024-12-14 18:27:22.482170] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:06.971 [2024-12-14 18:27:22.482261] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72965 ] 00:07:06.971 [2024-12-14 18:27:22.609330] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.971 [2024-12-14 18:27:22.682445] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.971 [2024-12-14 18:27:22.682709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.971 Running I/O for 1 seconds...[2024-12-14 18:27:22.682589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.971 [2024-12-14 18:27:22.682710] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.346 00:07:08.346 lcore 0: 213110 00:07:08.346 lcore 1: 213109 00:07:08.346 lcore 2: 213109 00:07:08.346 lcore 3: 213110 00:07:08.346 done. 
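The per-lcore counters above close out event_perf. For reference, the three event-framework micro-benchmarks in this suite are invoked as sketched below, with the exact masks and durations taken from the trace (-m is the reactor core mask, -t the run time in seconds); the reactor and reactor_perf runs follow in the log:

  EV=/home/vagrant/spdk_repo/spdk/test/event
  $EV/event_perf/event_perf -m 0xF -t 1    # four reactors; prints one event count per lcore
  $EV/reactor/reactor -t 1                 # oneshot + timed pollers (the tick lines below)
  $EV/reactor_perf/reactor_perf -t 1       # single reactor; reports events per second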
00:07:08.346 00:07:08.346 real 0m1.295s 00:07:08.346 user 0m4.096s 00:07:08.346 sys 0m0.077s 00:07:08.346 18:27:23 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.346 18:27:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:08.346 ************************************ 00:07:08.346 END TEST event_perf 00:07:08.346 ************************************ 00:07:08.346 18:27:23 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:08.346 18:27:23 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:08.346 18:27:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.346 18:27:23 event -- common/autotest_common.sh@10 -- # set +x 00:07:08.346 ************************************ 00:07:08.346 START TEST event_reactor 00:07:08.346 ************************************ 00:07:08.346 18:27:23 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:08.346 [2024-12-14 18:27:23.833722] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:08.346 [2024-12-14 18:27:23.833838] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72998 ] 00:07:08.346 [2024-12-14 18:27:23.965584] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.346 [2024-12-14 18:27:24.030869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.721 test_start 00:07:09.721 oneshot 00:07:09.721 tick 100 00:07:09.721 tick 100 00:07:09.721 tick 250 00:07:09.721 tick 100 00:07:09.721 tick 100 00:07:09.721 tick 100 00:07:09.721 tick 250 00:07:09.721 tick 500 00:07:09.721 tick 100 00:07:09.721 tick 100 00:07:09.721 tick 250 00:07:09.721 tick 100 00:07:09.721 tick 100 00:07:09.721 test_end 00:07:09.721 00:07:09.721 real 0m1.281s 00:07:09.721 user 0m1.119s 00:07:09.721 sys 0m0.057s 00:07:09.721 18:27:25 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.721 18:27:25 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:09.721 ************************************ 00:07:09.721 END TEST event_reactor 00:07:09.721 ************************************ 00:07:09.721 18:27:25 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:09.721 18:27:25 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:09.721 18:27:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.721 18:27:25 event -- common/autotest_common.sh@10 -- # set +x 00:07:09.721 ************************************ 00:07:09.721 START TEST event_reactor_perf 00:07:09.721 ************************************ 00:07:09.721 18:27:25 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:09.721 [2024-12-14 18:27:25.176476] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:09.721 [2024-12-14 18:27:25.176574] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73033 ] 00:07:09.721 [2024-12-14 18:27:25.315355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.721 [2024-12-14 18:27:25.388466] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.096 test_start 00:07:11.096 test_end 00:07:11.096 Performance: 460244 events per second 00:07:11.096 00:07:11.096 real 0m1.299s 00:07:11.096 user 0m1.129s 00:07:11.096 sys 0m0.066s 00:07:11.096 18:27:26 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.096 18:27:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:11.096 ************************************ 00:07:11.097 END TEST event_reactor_perf 00:07:11.097 ************************************ 00:07:11.097 18:27:26 event -- event/event.sh@49 -- # uname -s 00:07:11.097 18:27:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:11.097 18:27:26 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:11.097 18:27:26 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.097 18:27:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.097 18:27:26 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.097 ************************************ 00:07:11.097 START TEST event_scheduler 00:07:11.097 ************************************ 00:07:11.097 18:27:26 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:11.097 * Looking for test storage... 
00:07:11.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:11.097 18:27:26 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:11.097 18:27:26 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:07:11.097 18:27:26 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:11.097 18:27:26 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.097 18:27:26 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:11.097 18:27:26 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.097 18:27:26 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:11.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.097 --rc genhtml_branch_coverage=1 00:07:11.097 --rc genhtml_function_coverage=1 00:07:11.097 --rc genhtml_legend=1 00:07:11.097 --rc geninfo_all_blocks=1 00:07:11.097 --rc geninfo_unexecuted_blocks=1 00:07:11.097 00:07:11.097 ' 00:07:11.097 18:27:26 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:11.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.097 --rc genhtml_branch_coverage=1 00:07:11.097 --rc genhtml_function_coverage=1 00:07:11.097 --rc genhtml_legend=1 00:07:11.097 --rc geninfo_all_blocks=1 00:07:11.097 --rc geninfo_unexecuted_blocks=1 00:07:11.097 00:07:11.097 ' 00:07:11.097 18:27:26 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:11.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.097 --rc genhtml_branch_coverage=1 00:07:11.097 --rc genhtml_function_coverage=1 00:07:11.097 --rc genhtml_legend=1 00:07:11.097 --rc geninfo_all_blocks=1 00:07:11.097 --rc geninfo_unexecuted_blocks=1 00:07:11.097 00:07:11.097 ' 00:07:11.097 18:27:26 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:11.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.097 --rc genhtml_branch_coverage=1 00:07:11.097 --rc genhtml_function_coverage=1 00:07:11.097 --rc genhtml_legend=1 00:07:11.097 --rc geninfo_all_blocks=1 00:07:11.097 --rc geninfo_unexecuted_blocks=1 00:07:11.097 00:07:11.097 ' 00:07:11.097 18:27:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:11.097 18:27:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=73103 00:07:11.097 18:27:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:11.097 18:27:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 73103 00:07:11.097 18:27:26 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 73103 ']' 00:07:11.097 18:27:26 event.event_scheduler -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:11.097 18:27:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:11.097 18:27:26 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.097 18:27:26 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.097 18:27:26 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.097 18:27:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:11.097 [2024-12-14 18:27:26.765255] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:11.097 [2024-12-14 18:27:26.765366] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73103 ] 00:07:11.356 [2024-12-14 18:27:26.905652] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.356 [2024-12-14 18:27:26.989787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.356 [2024-12-14 18:27:26.990664] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.356 [2024-12-14 18:27:26.990745] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.356 [2024-12-14 18:27:26.990754] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.291 18:27:27 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.291 18:27:27 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:12.291 18:27:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:12.291 18:27:27 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.291 18:27:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:12.291 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:12.291 POWER: Cannot set governor of lcore 0 to userspace 00:07:12.291 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:12.291 POWER: Cannot set governor of lcore 0 to performance 00:07:12.291 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:12.291 POWER: Cannot set governor of lcore 0 to userspace 00:07:12.291 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:12.291 POWER: Cannot set governor of lcore 0 to userspace 00:07:12.291 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:12.291 POWER: Unable to set Power Management Environment for lcore 0 00:07:12.291 [2024-12-14 18:27:27.804134] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:12.291 [2024-12-14 18:27:27.804146] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:12.291 [2024-12-14 18:27:27.804154] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:12.291 [2024-12-14 18:27:27.804171] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:12.291 [2024-12-14 
18:27:27.804178] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:12.291 [2024-12-14 18:27:27.804185] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:12.291 18:27:27 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.291 18:27:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:12.291 18:27:27 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.291 18:27:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:12.291 [2024-12-14 18:27:27.922047] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:12.291 18:27:27 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.291 18:27:27 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:12.291 18:27:27 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.291 18:27:27 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.291 18:27:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:12.291 ************************************ 00:07:12.291 START TEST scheduler_create_thread 00:07:12.291 ************************************ 00:07:12.291 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:12.291 18:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:12.291 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.292 2 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.292 3 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.292 4 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.292 18:27:27 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.292 5 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.292 6 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.292 7 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.292 8 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.292 18:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.292 9 00:07:12.292 18:27:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.292 18:27:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:12.292 18:27:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.292 18:27:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.292 10 00:07:12.292 18:27:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.292 18:27:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:12.292 18:27:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.292 18:27:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.292 18:27:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.292 18:27:28 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:12.292 18:27:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:12.292 18:27:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.292 18:27:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.859 18:27:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.859 18:27:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:12.859 18:27:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.859 18:27:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.232 18:27:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.232 18:27:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:14.232 18:27:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:14.232 18:27:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.232 18:27:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:15.606 18:27:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.606 00:07:15.606 real 0m3.094s 00:07:15.606 user 0m0.012s 00:07:15.606 sys 0m0.006s 00:07:15.606 18:27:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.606 ************************************ 00:07:15.606 18:27:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:15.606 END TEST scheduler_create_thread 00:07:15.606 ************************************ 00:07:15.606 18:27:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:15.606 18:27:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 73103 00:07:15.606 18:27:31 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 73103 ']' 00:07:15.606 18:27:31 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 73103 00:07:15.606 18:27:31 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:15.606 18:27:31 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.606 18:27:31 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73103 00:07:15.606 18:27:31 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:15.606 18:27:31 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:15.606 killing process with pid 73103 00:07:15.606 18:27:31 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73103' 00:07:15.606 18:27:31 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 73103 00:07:15.606 18:27:31 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 73103 00:07:15.864 [2024-12-14 18:27:31.409908] 
scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:16.122 00:07:16.122 real 0m5.180s 00:07:16.122 user 0m10.251s 00:07:16.122 sys 0m0.446s 00:07:16.122 18:27:31 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.122 18:27:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:16.122 ************************************ 00:07:16.122 END TEST event_scheduler 00:07:16.122 ************************************ 00:07:16.122 18:27:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:16.122 18:27:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:16.122 18:27:31 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:16.122 18:27:31 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.122 18:27:31 event -- common/autotest_common.sh@10 -- # set +x 00:07:16.122 ************************************ 00:07:16.122 START TEST app_repeat 00:07:16.122 ************************************ 00:07:16.122 18:27:31 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:16.122 18:27:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.122 18:27:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.122 18:27:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:16.122 18:27:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:16.122 18:27:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:16.122 18:27:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:16.122 18:27:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:16.122 18:27:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=73226 00:07:16.122 18:27:31 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:16.122 18:27:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:16.122 Process app_repeat pid: 73226 00:07:16.122 18:27:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 73226' 00:07:16.122 18:27:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:16.122 spdk_app_start Round 0 00:07:16.122 18:27:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:16.122 18:27:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73226 /var/tmp/spdk-nbd.sock 00:07:16.122 18:27:31 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 73226 ']' 00:07:16.122 18:27:31 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:16.122 18:27:31 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:16.122 18:27:31 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:16.122 18:27:31 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.122 18:27:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:16.122 [2024-12-14 18:27:31.794201] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
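The scheduler_create_thread test traced above drives everything through scripts/rpc.py with the scheduler test plugin. As a reading aid, the call sequence condenses to the sketch below; the repo path and arguments are taken from the trace, the real test wraps each call in the rpc_cmd helper with xtrace toggled, and the thread ids are whatever the create RPC returns (11 and 12 in this run):

    #!/usr/bin/env bash
    # Condensed sketch of the RPC sequence from the scheduler_create_thread
    # trace above; an illustration, not the test script itself.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Four fully busy threads, one pinned per core (masks 0x1..0x8).
    for mask in 0x1 0x2 0x4 0x8; do
        "$rpc" --plugin scheduler_plugin scheduler_thread_create \
            -n active_pinned -m "$mask" -a 100
    done

    # Four idle threads (0% active) pinned to the same cores.
    for mask in 0x1 0x2 0x4 0x8; do
        "$rpc" --plugin scheduler_plugin scheduler_thread_create \
            -n idle_pinned -m "$mask" -a 0
    done

    # Unpinned threads; the create RPC prints the new thread id.
    "$rpc" --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    tid=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    "$rpc" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50

    # Create one more thread and delete it again to cover the delete path.
    tid=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    "$rpc" --plugin scheduler_plugin scheduler_thread_delete "$tid"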
00:07:16.122 [2024-12-14 18:27:31.794307] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73226 ] 00:07:16.380 [2024-12-14 18:27:31.928025] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:16.380 [2024-12-14 18:27:31.993884] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.380 [2024-12-14 18:27:31.993888] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.380 18:27:32 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.380 18:27:32 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:16.380 18:27:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:16.638 Malloc0 00:07:16.895 18:27:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.152 Malloc1 00:07:17.152 18:27:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.152 18:27:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.152 18:27:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.152 18:27:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:17.152 18:27:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.152 18:27:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:17.152 18:27:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.152 18:27:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.152 18:27:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.152 18:27:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.152 18:27:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.152 18:27:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.152 18:27:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:17.152 18:27:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.152 18:27:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.152 18:27:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:17.411 /dev/nbd0 00:07:17.411 18:27:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:17.411 18:27:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:17.411 18:27:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:17.411 18:27:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:17.411 18:27:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.411 18:27:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.411 18:27:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:17.411 18:27:32 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:07:17.411 18:27:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.411 18:27:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.411 18:27:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.411 1+0 records in 00:07:17.411 1+0 records out 00:07:17.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289027 s, 14.2 MB/s 00:07:17.411 18:27:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.411 18:27:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:17.411 18:27:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.411 18:27:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.411 18:27:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:17.411 18:27:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.411 18:27:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.411 18:27:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:17.669 /dev/nbd1 00:07:17.669 18:27:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:17.669 18:27:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:17.669 18:27:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:17.669 18:27:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:17.669 18:27:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.669 18:27:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.669 18:27:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:17.669 18:27:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:17.669 18:27:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.669 18:27:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.669 18:27:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.669 1+0 records in 00:07:17.669 1+0 records out 00:07:17.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280247 s, 14.6 MB/s 00:07:17.669 18:27:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.669 18:27:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:17.669 18:27:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.669 18:27:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.669 18:27:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:17.669 18:27:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.669 18:27:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.669 18:27:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.669 18:27:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:07:17.669 18:27:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.933 18:27:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:17.933 { 00:07:17.933 "bdev_name": "Malloc0", 00:07:17.933 "nbd_device": "/dev/nbd0" 00:07:17.933 }, 00:07:17.933 { 00:07:17.933 "bdev_name": "Malloc1", 00:07:17.933 "nbd_device": "/dev/nbd1" 00:07:17.933 } 00:07:17.933 ]' 00:07:17.933 18:27:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:17.933 { 00:07:17.933 "bdev_name": "Malloc0", 00:07:17.933 "nbd_device": "/dev/nbd0" 00:07:17.933 }, 00:07:17.933 { 00:07:17.933 "bdev_name": "Malloc1", 00:07:17.933 "nbd_device": "/dev/nbd1" 00:07:17.933 } 00:07:17.933 ]' 00:07:17.933 18:27:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.933 18:27:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:17.933 /dev/nbd1' 00:07:17.933 18:27:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.933 18:27:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:17.933 /dev/nbd1' 00:07:17.933 18:27:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:17.933 18:27:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:17.933 18:27:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:17.933 18:27:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:17.933 18:27:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:17.933 18:27:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.933 18:27:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:17.933 18:27:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:17.933 18:27:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:17.933 18:27:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:17.933 18:27:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:17.933 256+0 records in 00:07:17.933 256+0 records out 00:07:17.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00726167 s, 144 MB/s 00:07:17.933 18:27:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.933 18:27:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:17.933 256+0 records in 00:07:17.933 256+0 records out 00:07:17.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192481 s, 54.5 MB/s 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:18.200 256+0 records in 00:07:18.200 256+0 records out 00:07:18.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024097 s, 43.5 MB/s 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.200 18:27:33 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.200 18:27:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.458 18:27:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.458 18:27:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.458 18:27:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.458 18:27:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.458 18:27:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.458 18:27:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.458 18:27:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.458 18:27:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.458 18:27:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.458 18:27:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.717 18:27:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.717 18:27:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.717 18:27:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.717 18:27:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.717 18:27:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.717 18:27:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.717 18:27:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.717 18:27:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.717 18:27:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.717 18:27:34 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.717 18:27:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.975 18:27:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:18.975 18:27:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:18.975 18:27:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.975 18:27:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:18.975 18:27:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:18.975 18:27:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.975 18:27:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:18.975 18:27:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:18.975 18:27:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:18.975 18:27:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:18.975 18:27:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:18.975 18:27:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:18.975 18:27:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:19.233 18:27:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:19.491 [2024-12-14 18:27:35.186342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.491 [2024-12-14 18:27:35.239194] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.491 [2024-12-14 18:27:35.239205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.748 [2024-12-14 18:27:35.314211] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:19.748 [2024-12-14 18:27:35.314293] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:22.272 18:27:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:22.272 spdk_app_start Round 1 00:07:22.272 18:27:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:22.272 18:27:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73226 /var/tmp/spdk-nbd.sock 00:07:22.272 18:27:37 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 73226 ']' 00:07:22.272 18:27:37 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:22.272 18:27:37 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:22.272 18:27:37 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
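Each app_repeat round above exercises the same write-then-verify pattern over the two NBD devices, visible in the dd/cmp records of nbd_dd_data_verify. Condensed into a sketch, assuming the temp-file path shown in the trace:

    # Write-then-verify pattern from nbd_dd_data_verify, condensed.
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write: 1 MiB of random data, pushed onto each device with O_DIRECT
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify: byte-compare the first 1 MiB of each device against the file;
    # cmp exits non-zero (failing the test) on any mismatch
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"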
00:07:22.272 18:27:37 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.272 18:27:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:22.530 18:27:38 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.530 18:27:38 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:22.531 18:27:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:22.789 Malloc0 00:07:23.047 18:27:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:23.306 Malloc1 00:07:23.306 18:27:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:23.306 18:27:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.306 18:27:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:23.306 18:27:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:23.306 18:27:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.306 18:27:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:23.306 18:27:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:23.306 18:27:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.306 18:27:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:23.306 18:27:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:23.306 18:27:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.306 18:27:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:23.306 18:27:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:23.306 18:27:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:23.306 18:27:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:23.306 18:27:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:23.564 /dev/nbd0 00:07:23.564 18:27:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:23.564 18:27:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:23.564 18:27:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:23.564 18:27:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:23.564 18:27:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:23.564 18:27:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:23.564 18:27:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:23.564 18:27:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:23.564 18:27:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:23.564 18:27:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:23.564 18:27:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:23.564 1+0 records in 00:07:23.564 1+0 records out 
00:07:23.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002411 s, 17.0 MB/s 00:07:23.564 18:27:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:23.564 18:27:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:23.564 18:27:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:23.564 18:27:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:23.564 18:27:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:23.564 18:27:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.564 18:27:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:23.564 18:27:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:23.822 /dev/nbd1 00:07:23.822 18:27:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:23.822 18:27:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:23.822 18:27:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:23.822 18:27:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:23.822 18:27:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:23.822 18:27:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:23.822 18:27:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:23.822 18:27:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:23.822 18:27:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:23.822 18:27:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:23.822 18:27:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:23.822 1+0 records in 00:07:23.822 1+0 records out 00:07:23.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285076 s, 14.4 MB/s 00:07:23.822 18:27:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:23.822 18:27:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:23.822 18:27:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:23.822 18:27:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:23.822 18:27:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:23.822 18:27:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.822 18:27:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:23.823 18:27:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.823 18:27:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.823 18:27:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:24.081 18:27:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:24.081 { 00:07:24.081 "bdev_name": "Malloc0", 00:07:24.081 "nbd_device": "/dev/nbd0" 00:07:24.081 }, 00:07:24.081 { 00:07:24.081 "bdev_name": "Malloc1", 00:07:24.081 "nbd_device": "/dev/nbd1" 00:07:24.081 } 
00:07:24.081 ]' 00:07:24.081 18:27:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:24.081 { 00:07:24.081 "bdev_name": "Malloc0", 00:07:24.081 "nbd_device": "/dev/nbd0" 00:07:24.081 }, 00:07:24.081 { 00:07:24.081 "bdev_name": "Malloc1", 00:07:24.081 "nbd_device": "/dev/nbd1" 00:07:24.081 } 00:07:24.081 ]' 00:07:24.081 18:27:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:24.339 /dev/nbd1' 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:24.339 /dev/nbd1' 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:24.339 256+0 records in 00:07:24.339 256+0 records out 00:07:24.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0095655 s, 110 MB/s 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:24.339 256+0 records in 00:07:24.339 256+0 records out 00:07:24.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229981 s, 45.6 MB/s 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:24.339 256+0 records in 00:07:24.339 256+0 records out 00:07:24.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220447 s, 47.6 MB/s 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:24.339 18:27:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:24.340 18:27:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.340 18:27:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:24.598 18:27:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:24.598 18:27:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:24.598 18:27:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:24.598 18:27:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.598 18:27:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.598 18:27:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:24.598 18:27:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:24.598 18:27:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.598 18:27:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.598 18:27:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:24.856 18:27:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:24.856 18:27:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:24.856 18:27:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:24.856 18:27:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.856 18:27:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.856 18:27:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:24.856 18:27:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:24.856 18:27:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.856 18:27:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:24.856 18:27:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.856 18:27:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:25.114 18:27:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:25.114 18:27:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:25.114 18:27:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:07:25.375 18:27:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:25.375 18:27:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:25.375 18:27:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:25.375 18:27:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:25.375 18:27:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:25.375 18:27:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:25.375 18:27:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:25.375 18:27:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:25.375 18:27:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:25.375 18:27:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:25.632 18:27:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:25.890 [2024-12-14 18:27:41.470390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:25.890 [2024-12-14 18:27:41.527410] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.890 [2024-12-14 18:27:41.527422] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.890 [2024-12-14 18:27:41.604101] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:25.890 [2024-12-14 18:27:41.604173] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:29.171 spdk_app_start Round 2 00:07:29.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:29.171 18:27:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:29.171 18:27:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:29.171 18:27:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73226 /var/tmp/spdk-nbd.sock 00:07:29.171 18:27:44 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 73226 ']' 00:07:29.171 18:27:44 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:29.171 18:27:44 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.171 18:27:44 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
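The waitfornbd checks interleaved through each round boil down to the polling helper sketched below. The /proc/partitions grep, the single 4 KiB O_DIRECT read, and the size check are taken from the trace; the retry delay is an assumption, since the device appeared on the first probe in this run and no sleep shows up in the log:

    # Sketch of waitfornbd: poll /proc/partitions until the device shows up,
    # then prove it is readable with one 4 KiB O_DIRECT read.
    waitfornbd() {
        local nbd_name=$1 i size
        local testfile=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed retry delay; not visible in this run's trace
        done
        for (( i = 1; i <= 20; i++ )); do
            dd if="/dev/$nbd_name" of="$testfile" bs=4096 count=1 iflag=direct && break
        done
        size=$(stat -c %s "$testfile")
        rm -f "$testfile"
        [ "$size" != 0 ]    # a non-empty read means the device is live
    }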
00:07:29.171 18:27:44 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.171 18:27:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:29.171 18:27:44 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.171 18:27:44 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:29.171 18:27:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:29.171 Malloc0 00:07:29.171 18:27:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:29.429 Malloc1 00:07:29.429 18:27:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:29.429 18:27:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.429 18:27:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:29.429 18:27:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:29.429 18:27:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.429 18:27:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:29.429 18:27:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:29.429 18:27:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.429 18:27:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:29.429 18:27:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:29.429 18:27:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.429 18:27:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:29.429 18:27:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:29.429 18:27:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:29.429 18:27:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:29.429 18:27:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:29.687 /dev/nbd0 00:07:29.687 18:27:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:29.687 18:27:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:29.687 18:27:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:29.687 18:27:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:29.687 18:27:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:29.687 18:27:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:29.687 18:27:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:29.687 18:27:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:29.687 18:27:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:29.687 18:27:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:29.687 18:27:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:29.687 1+0 records in 00:07:29.687 1+0 records out 
00:07:29.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194858 s, 21.0 MB/s 00:07:29.687 18:27:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:29.687 18:27:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:29.687 18:27:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:29.687 18:27:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:29.687 18:27:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:29.687 18:27:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:29.687 18:27:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:29.687 18:27:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:29.945 /dev/nbd1 00:07:29.945 18:27:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:29.945 18:27:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:29.945 18:27:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:29.945 18:27:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:29.945 18:27:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:29.945 18:27:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:29.945 18:27:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:30.205 18:27:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:30.205 18:27:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:30.205 18:27:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:30.205 18:27:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:30.205 1+0 records in 00:07:30.205 1+0 records out 00:07:30.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347561 s, 11.8 MB/s 00:07:30.205 18:27:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:30.205 18:27:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:30.205 18:27:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:30.205 18:27:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:30.205 18:27:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:30.205 18:27:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:30.205 18:27:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:30.205 18:27:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:30.205 18:27:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.205 18:27:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:30.205 18:27:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:30.205 { 00:07:30.205 "bdev_name": "Malloc0", 00:07:30.205 "nbd_device": "/dev/nbd0" 00:07:30.205 }, 00:07:30.205 { 00:07:30.205 "bdev_name": "Malloc1", 00:07:30.205 "nbd_device": "/dev/nbd1" 00:07:30.205 } 
00:07:30.205 ]' 00:07:30.205 18:27:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:30.205 { 00:07:30.205 "bdev_name": "Malloc0", 00:07:30.205 "nbd_device": "/dev/nbd0" 00:07:30.205 }, 00:07:30.205 { 00:07:30.205 "bdev_name": "Malloc1", 00:07:30.205 "nbd_device": "/dev/nbd1" 00:07:30.205 } 00:07:30.205 ]' 00:07:30.205 18:27:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:30.483 18:27:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:30.483 /dev/nbd1' 00:07:30.483 18:27:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:30.483 /dev/nbd1' 00:07:30.483 18:27:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:30.483 18:27:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:30.483 18:27:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:30.483 18:27:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:30.483 18:27:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:30.483 18:27:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:30.483 18:27:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.483 18:27:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:30.483 18:27:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:30.483 18:27:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:30.483 18:27:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:30.483 18:27:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:30.483 256+0 records in 00:07:30.483 256+0 records out 00:07:30.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107002 s, 98.0 MB/s 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:30.483 256+0 records in 00:07:30.483 256+0 records out 00:07:30.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222206 s, 47.2 MB/s 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:30.483 256+0 records in 00:07:30.483 256+0 records out 00:07:30.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248162 s, 42.3 MB/s 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:30.483 18:27:46 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.483 18:27:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:30.741 18:27:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:30.741 18:27:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:30.741 18:27:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:30.741 18:27:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:30.741 18:27:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:30.741 18:27:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:30.741 18:27:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:30.741 18:27:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:30.741 18:27:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.741 18:27:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:31.000 18:27:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:31.000 18:27:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:31.000 18:27:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:31.000 18:27:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.000 18:27:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.000 18:27:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:31.000 18:27:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:31.000 18:27:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.000 18:27:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:31.000 18:27:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.000 18:27:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:31.258 18:27:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:31.258 18:27:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:31.258 18:27:46 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:07:31.516 18:27:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:31.516 18:27:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:31.516 18:27:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:31.516 18:27:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:31.516 18:27:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:31.516 18:27:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:31.516 18:27:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:31.516 18:27:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:31.516 18:27:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:31.517 18:27:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:31.776 18:27:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:32.034 [2024-12-14 18:27:47.646752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:32.034 [2024-12-14 18:27:47.708095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.034 [2024-12-14 18:27:47.708105] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.034 [2024-12-14 18:27:47.784743] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:32.034 [2024-12-14 18:27:47.784819] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:35.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:35.321 18:27:50 event.app_repeat -- event/event.sh@38 -- # waitforlisten 73226 /var/tmp/spdk-nbd.sock 00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 73226 ']' 00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:35.321 18:27:50 event.app_repeat -- event/event.sh@39 -- # killprocess 73226 00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 73226 ']' 00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 73226 00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73226 00:07:35.321 killing process with pid 73226 00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73226' 00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@969 -- # kill 73226 00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@974 -- # wait 73226 00:07:35.321 spdk_app_start is called in Round 0. 00:07:35.321 Shutdown signal received, stop current app iteration 00:07:35.321 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:35.321 spdk_app_start is called in Round 1. 00:07:35.321 Shutdown signal received, stop current app iteration 00:07:35.321 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:35.321 spdk_app_start is called in Round 2. 00:07:35.321 Shutdown signal received, stop current app iteration 00:07:35.321 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:35.321 spdk_app_start is called in Round 3. 00:07:35.321 Shutdown signal received, stop current app iteration 00:07:35.321 ************************************ 00:07:35.321 END TEST app_repeat 00:07:35.321 ************************************ 00:07:35.321 18:27:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:35.321 18:27:50 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:35.321 00:07:35.321 real 0m19.180s 00:07:35.321 user 0m43.251s 00:07:35.321 sys 0m3.273s 00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.321 18:27:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:35.321 18:27:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:35.321 18:27:50 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:35.321 18:27:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.321 18:27:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.321 18:27:50 event -- common/autotest_common.sh@10 -- # set +x 00:07:35.321 ************************************ 00:07:35.321 START TEST cpu_locks 00:07:35.321 ************************************ 00:07:35.321 18:27:50 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:35.580 * Looking for test storage... 
00:07:35.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:35.580 18:27:51 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:35.580 18:27:51 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:35.580 18:27:51 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:35.580 18:27:51 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.580 18:27:51 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:35.580 18:27:51 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.580 18:27:51 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:35.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.580 --rc genhtml_branch_coverage=1 00:07:35.580 --rc genhtml_function_coverage=1 00:07:35.580 --rc genhtml_legend=1 00:07:35.580 --rc geninfo_all_blocks=1 00:07:35.580 --rc geninfo_unexecuted_blocks=1 00:07:35.580 00:07:35.580 ' 00:07:35.580 18:27:51 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:35.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.580 --rc genhtml_branch_coverage=1 00:07:35.580 --rc genhtml_function_coverage=1 
00:07:35.580 --rc genhtml_legend=1 00:07:35.580 --rc geninfo_all_blocks=1 00:07:35.580 --rc geninfo_unexecuted_blocks=1 00:07:35.580 00:07:35.580 ' 00:07:35.580 18:27:51 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:35.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.580 --rc genhtml_branch_coverage=1 00:07:35.580 --rc genhtml_function_coverage=1 00:07:35.580 --rc genhtml_legend=1 00:07:35.580 --rc geninfo_all_blocks=1 00:07:35.580 --rc geninfo_unexecuted_blocks=1 00:07:35.580 00:07:35.580 ' 00:07:35.580 18:27:51 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:35.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.580 --rc genhtml_branch_coverage=1 00:07:35.580 --rc genhtml_function_coverage=1 00:07:35.580 --rc genhtml_legend=1 00:07:35.580 --rc geninfo_all_blocks=1 00:07:35.580 --rc geninfo_unexecuted_blocks=1 00:07:35.580 00:07:35.580 ' 00:07:35.580 18:27:51 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:35.580 18:27:51 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:35.580 18:27:51 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:35.580 18:27:51 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:35.580 18:27:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.580 18:27:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.580 18:27:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:35.580 ************************************ 00:07:35.580 START TEST default_locks 00:07:35.580 ************************************ 00:07:35.580 18:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:35.580 18:27:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=73861 00:07:35.580 18:27:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 73861 00:07:35.580 18:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 73861 ']' 00:07:35.580 18:27:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:35.580 18:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.580 18:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.580 18:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.580 18:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.580 18:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:35.580 [2024-12-14 18:27:51.274319] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
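The lcov probe traced just above walks the version helpers in scripts/common.sh. Roughly, and with non-numeric components and extra operators omitted (a sketch, not the verbatim source):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS='.-:'        # split on dots, dashes and colons, as in the trace
    local ver1 ver2 v
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    # walk the components of the longer version string, missing parts count as 0
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        ((d1 > d2)) && { [[ $op == *'>'* ]]; return; }
        ((d1 < d2)) && { [[ $op == *'<'* ]]; return; }
    done
    [[ $op == *'='* ]]     # all components equal: only ==, <= and >= succeed
}

Here lt 1.15 2 succeeds (1 < 2 on the first component), which is why the branch-coverage LCOV_OPTS for the older lcov are the ones exported above.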
00:07:35.581 [2024-12-14 18:27:51.274411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73861 ] 00:07:35.840 [2024-12-14 18:27:51.416751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.840 [2024-12-14 18:27:51.482365] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.775 18:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.775 18:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:36.775 18:27:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 73861 00:07:36.775 18:27:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 73861 00:07:36.775 18:27:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:37.034 18:27:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 73861 00:07:37.034 18:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 73861 ']' 00:07:37.034 18:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 73861 00:07:37.034 18:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:37.034 18:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.034 18:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73861 00:07:37.034 18:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:37.034 18:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:37.034 killing process with pid 73861 00:07:37.034 18:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73861' 00:07:37.034 18:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 73861 00:07:37.034 18:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 73861 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 73861 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 73861 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 73861 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 73861 ']' 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.602 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.602 ERROR: process (pid: 73861) is no longer running 00:07:37.602 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (73861) - No such process 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:37.602 00:07:37.602 real 0m1.875s 00:07:37.602 user 0m1.977s 00:07:37.602 sys 0m0.532s 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.602 18:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.602 ************************************ 00:07:37.602 END TEST default_locks 00:07:37.602 ************************************ 00:07:37.602 18:27:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:37.602 18:27:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.602 18:27:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.602 18:27:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.602 ************************************ 00:07:37.602 START TEST default_locks_via_rpc 00:07:37.602 ************************************ 00:07:37.602 18:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:37.602 18:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=73927 00:07:37.602 18:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:37.602 18:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 73927 00:07:37.602 18:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 73927 ']' 00:07:37.602 18:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.602 18:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
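Two helpers recur in every cpu_locks subtest from here on: locks_exist (event/cpu_locks.sh@22) and killprocess (common/autotest_common.sh@950-@974). Sketches reconstructed from the xtrace output; the sudo branch is paraphrased rather than copied:

locks_exist() {
    # the target must hold a POSIX file lock named spdk_cpu_lock_*
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                  # @950: a pid is required
    kill -0 "$pid" || return 1                 # @954: fail if already gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in these runs
        # @960: if the wrapper is sudo, the real target is its child (paraphrased)
        [ "$process_name" = sudo ] && pid=$(ps --ppid "$pid" -o pid= | head -n1)
    fi
    echo "killing process with pid $pid"
    kill "$pid"                                # @969: SIGTERM
    wait "$pid"                                # @974: reap and propagate exit status
}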
00:07:37.602 18:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.602 18:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.602 18:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.602 [2024-12-14 18:27:53.195232] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:37.602 [2024-12-14 18:27:53.195310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73927 ] 00:07:37.602 [2024-12-14 18:27:53.333119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.861 [2024-12-14 18:27:53.405744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.120 18:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.120 18:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:38.120 18:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:38.120 18:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.120 18:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.120 18:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.120 18:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:38.120 18:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:38.120 18:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:38.120 18:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:38.120 18:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:38.120 18:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.120 18:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.120 18:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.120 18:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 73927 00:07:38.120 18:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 73927 00:07:38.120 18:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:38.687 18:27:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 73927 00:07:38.687 18:27:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 73927 ']' 00:07:38.687 18:27:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 73927 00:07:38.687 18:27:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:38.687 18:27:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.687 18:27:54 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73927 00:07:38.687 18:27:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.687 18:27:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.687 killing process with pid 73927 00:07:38.687 18:27:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73927' 00:07:38.687 18:27:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 73927 00:07:38.687 18:27:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 73927 00:07:39.254 00:07:39.254 real 0m1.632s 00:07:39.254 user 0m1.514s 00:07:39.254 sys 0m0.630s 00:07:39.254 18:27:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.254 18:27:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.254 ************************************ 00:07:39.254 END TEST default_locks_via_rpc 00:07:39.254 ************************************ 00:07:39.254 18:27:54 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:39.254 18:27:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.254 18:27:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.254 18:27:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.254 ************************************ 00:07:39.254 START TEST non_locking_app_on_locked_coremask 00:07:39.254 ************************************ 00:07:39.254 18:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:39.254 18:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=73977 00:07:39.254 18:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 73977 /var/tmp/spdk.sock 00:07:39.254 18:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 73977 ']' 00:07:39.254 18:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.254 18:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:39.254 18:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.254 18:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.254 18:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.255 18:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:39.255 [2024-12-14 18:27:54.897674] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
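default_locks_via_rpc, which finished just above, checks the same invariant but toggles enforcement at runtime instead of at startup. Condensed to its essential commands (rpc_cmd in the harness wraps scripts/rpc.py, shown here with the explicit socket flag):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
# ...at this point no /var/tmp/spdk_cpu_lock_* files exist (the no_locks check above)...
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # locks_exist passes again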
00:07:39.255 [2024-12-14 18:27:54.897762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73977 ] 00:07:39.512 [2024-12-14 18:27:55.036286] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.512 [2024-12-14 18:27:55.124384] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.446 18:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.446 18:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:40.446 18:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=74006 00:07:40.446 18:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:40.446 18:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 74006 /var/tmp/spdk2.sock 00:07:40.446 18:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 74006 ']' 00:07:40.446 18:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:40.446 18:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:40.446 18:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:40.446 18:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.446 18:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.446 [2024-12-14 18:27:55.940497] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:40.446 [2024-12-14 18:27:55.940588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74006 ] 00:07:40.446 [2024-12-14 18:27:56.075653] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:40.446 [2024-12-14 18:27:56.075701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.704 [2024-12-14 18:27:56.225477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.271 18:27:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.271 18:27:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:41.271 18:27:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 73977 00:07:41.271 18:27:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 73977 00:07:41.271 18:27:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:42.207 18:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 73977 00:07:42.207 18:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 73977 ']' 00:07:42.207 18:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 73977 00:07:42.207 18:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:42.207 18:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.207 18:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73977 00:07:42.207 killing process with pid 73977 00:07:42.207 18:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.207 18:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.207 18:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73977' 00:07:42.207 18:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 73977 00:07:42.207 18:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 73977 00:07:43.142 18:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 74006 00:07:43.142 18:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 74006 ']' 00:07:43.142 18:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 74006 00:07:43.142 18:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:43.142 18:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.142 18:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74006 00:07:43.401 killing process with pid 74006 00:07:43.401 18:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.401 18:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.401 18:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74006' 00:07:43.401 18:27:58 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 74006 00:07:43.401 18:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 74006 00:07:43.659 ************************************ 00:07:43.659 END TEST non_locking_app_on_locked_coremask 00:07:43.659 ************************************ 00:07:43.659 00:07:43.659 real 0m4.588s 00:07:43.659 user 0m4.907s 00:07:43.659 sys 0m1.356s 00:07:43.659 18:27:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.659 18:27:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.918 18:27:59 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:43.918 18:27:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.918 18:27:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.918 18:27:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.918 ************************************ 00:07:43.918 START TEST locking_app_on_unlocked_coremask 00:07:43.918 ************************************ 00:07:43.918 18:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:43.918 18:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=74095 00:07:43.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.918 18:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 74095 /var/tmp/spdk.sock 00:07:43.918 18:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 74095 ']' 00:07:43.918 18:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.918 18:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.918 18:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:43.918 18:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.918 18:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.918 18:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.918 [2024-12-14 18:27:59.539660] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:43.918 [2024-12-14 18:27:59.540087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74095 ] 00:07:44.177 [2024-12-14 18:27:59.678709] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
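The test that just ended verifies that the exclusion is one-way: a core claim only blocks other lock-taking instances. Condensed from the traced launches (binary path as used throughout this log):

# first instance claims core 0 and holds /var/tmp/spdk_cpu_lock_000
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
# second instance shares core 0 but opts out of lock enforcement entirely,
# so it starts fine on its own RPC socket
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &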
00:07:44.177 [2024-12-14 18:27:59.678744] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.177 [2024-12-14 18:27:59.761409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.126 18:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.126 18:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:45.126 18:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=74123 00:07:45.126 18:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:45.126 18:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 74123 /var/tmp/spdk2.sock 00:07:45.126 18:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 74123 ']' 00:07:45.126 18:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:45.126 18:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.126 18:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:45.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:45.126 18:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.126 18:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.126 [2024-12-14 18:28:00.607556] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:45.126 [2024-12-14 18:28:00.607848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74123 ] 00:07:45.126 [2024-12-14 18:28:00.749903] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.398 [2024-12-14 18:28:00.891525] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.965 18:28:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.965 18:28:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:45.965 18:28:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 74123 00:07:45.965 18:28:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:45.965 18:28:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 74123 00:07:46.900 18:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 74095 00:07:46.900 18:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 74095 ']' 00:07:46.900 18:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 74095 00:07:46.900 18:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:46.900 18:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.900 18:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74095 00:07:46.900 killing process with pid 74095 00:07:46.900 18:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:46.900 18:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:46.900 18:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74095' 00:07:46.900 18:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 74095 00:07:46.900 18:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 74095 00:07:47.836 18:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 74123 00:07:47.836 18:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 74123 ']' 00:07:47.836 18:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 74123 00:07:47.836 18:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:47.836 18:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:47.836 18:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74123 00:07:47.836 killing process with pid 74123 00:07:47.836 18:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:47.836 18:28:03 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:47.836 18:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74123' 00:07:47.836 18:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 74123 00:07:47.836 18:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 74123 00:07:48.403 ************************************ 00:07:48.403 END TEST locking_app_on_unlocked_coremask 00:07:48.403 ************************************ 00:07:48.403 00:07:48.403 real 0m4.537s 00:07:48.403 user 0m4.810s 00:07:48.403 sys 0m1.368s 00:07:48.403 18:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.403 18:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:48.403 18:28:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:48.403 18:28:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:48.403 18:28:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.403 18:28:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:48.403 ************************************ 00:07:48.403 START TEST locking_app_on_locked_coremask 00:07:48.403 ************************************ 00:07:48.403 18:28:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:48.403 18:28:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=74202 00:07:48.403 18:28:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 74202 /var/tmp/spdk.sock 00:07:48.403 18:28:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 74202 ']' 00:07:48.403 18:28:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.403 18:28:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.403 18:28:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:48.403 18:28:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.403 18:28:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.403 18:28:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:48.403 [2024-12-14 18:28:04.135309] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
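Every subtest above blocks on waitforlisten before issuing RPCs. Its traced skeleton (common/autotest_common.sh@831-@864) looks roughly like the following; the polling body is paraphrased from its observable behaviour, so treat the loop details as an approximation:

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}   # @835
    [ -n "$pid" ] || return 1                 # @831
    local max_retries=100                     # @836
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for ((i = max_retries; i > 0; i--)); do
        # dead process: give up (this is the branch the NOT tests rely on)
        kill -0 "$pid" 2> /dev/null || return 1
        # one successful RPC round-trip means the app is up
        scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.5
    done
    return 1
}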
00:07:48.403 [2024-12-14 18:28:04.135790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74202 ] 00:07:48.662 [2024-12-14 18:28:04.268303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.662 [2024-12-14 18:28:04.348672] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.598 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.598 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:49.598 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=74230 00:07:49.598 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 74230 /var/tmp/spdk2.sock 00:07:49.598 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:49.598 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:49.598 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 74230 /var/tmp/spdk2.sock 00:07:49.598 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:49.598 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.598 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:49.598 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.598 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 74230 /var/tmp/spdk2.sock 00:07:49.598 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 74230 ']' 00:07:49.598 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:49.598 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.598 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:49.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:49.598 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.598 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.598 [2024-12-14 18:28:05.156656] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:49.598 [2024-12-14 18:28:05.158029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74230 ] 00:07:49.598 [2024-12-14 18:28:05.302919] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 74202 has claimed it. 00:07:49.598 [2024-12-14 18:28:05.302999] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:50.164 ERROR: process (pid: 74230) is no longer running 00:07:50.164 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (74230) - No such process 00:07:50.164 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.164 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:50.164 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:50.164 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:50.164 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:50.164 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:50.164 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 74202 00:07:50.164 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 74202 00:07:50.164 18:28:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:50.423 18:28:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 74202 00:07:50.423 18:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 74202 ']' 00:07:50.423 18:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 74202 00:07:50.423 18:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:50.423 18:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:50.423 18:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74202 00:07:50.423 18:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:50.423 18:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:50.423 18:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74202' 00:07:50.423 killing process with pid 74202 00:07:50.423 18:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 74202 00:07:50.423 18:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 74202 00:07:50.990 ************************************ 00:07:50.990 END TEST locking_app_on_locked_coremask 00:07:50.990 ************************************ 00:07:50.990 00:07:50.990 real 0m2.602s 00:07:50.990 user 0m2.893s 00:07:50.990 sys 0m0.660s 00:07:50.990 18:28:06 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.990 18:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.990 18:28:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:50.990 18:28:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.990 18:28:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.990 18:28:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.990 ************************************ 00:07:50.990 START TEST locking_overlapped_coremask 00:07:50.990 ************************************ 00:07:50.990 18:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:50.990 18:28:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=74287 00:07:50.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.990 18:28:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 74287 /var/tmp/spdk.sock 00:07:50.990 18:28:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:50.990 18:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 74287 ']' 00:07:50.990 18:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.990 18:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.990 18:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.990 18:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.990 18:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.249 [2024-12-14 18:28:06.773664] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
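locking_app_on_locked_coremask, which just passed, is an expected-failure test: the second target must not come up because core 0 is already claimed, and claim_cpu_cores aborts it with the *ERROR* lines seen above. The inversion helper traced at common/autotest_common.sh@650-@677 reduces to this sketch (the real helper also special-cases exit codes above 128 and pattern arguments):

NOT() {
    # run the command, tolerating its failure...
    local es=0
    "$@" || es=$?
    # ...then succeed only if it actually failed (es=1 in the run above)
    (( es != 0 ))
}

# usage, as in event/cpu_locks.sh@120:
# NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock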
00:07:51.249 [2024-12-14 18:28:06.773895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74287 ] 00:07:51.249 [2024-12-14 18:28:06.906356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.249 [2024-12-14 18:28:06.973509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.249 [2024-12-14 18:28:06.973656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.249 [2024-12-14 18:28:06.973656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.817 18:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.817 18:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:51.817 18:28:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=74304 00:07:51.817 18:28:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:51.817 18:28:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 74304 /var/tmp/spdk2.sock 00:07:51.817 18:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:51.817 18:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 74304 /var/tmp/spdk2.sock 00:07:51.817 18:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:51.817 18:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.817 18:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:51.817 18:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.817 18:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 74304 /var/tmp/spdk2.sock 00:07:51.817 18:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 74304 ']' 00:07:51.817 18:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:51.817 18:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.817 18:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:51.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:51.817 18:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.817 18:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.817 [2024-12-14 18:28:07.376650] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:51.817 [2024-12-14 18:28:07.376728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74304 ] 00:07:51.817 [2024-12-14 18:28:07.512224] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 74287 has claimed it. 00:07:51.817 [2024-12-14 18:28:07.512293] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:52.385 ERROR: process (pid: 74304) is no longer running 00:07:52.385 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (74304) - No such process 00:07:52.385 18:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.385 18:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:52.385 18:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:52.385 18:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:52.385 18:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:52.385 18:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:52.385 18:28:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:52.385 18:28:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:52.385 18:28:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:52.385 18:28:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:52.385 18:28:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 74287 00:07:52.385 18:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 74287 ']' 00:07:52.385 18:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 74287 00:07:52.385 18:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:52.385 18:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.385 18:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74287 00:07:52.643 18:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.643 18:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.643 18:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74287' 00:07:52.643 killing process with pid 74287 00:07:52.643 18:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 74287 00:07:52.643 18:28:08 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 74287 00:07:53.210 00:07:53.210 real 0m1.999s 00:07:53.210 user 0m5.371s 00:07:53.210 sys 0m0.505s 00:07:53.210 18:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.210 18:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:53.210 ************************************ 00:07:53.210 END TEST locking_overlapped_coremask 00:07:53.210 ************************************ 00:07:53.210 18:28:08 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:53.210 18:28:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:53.210 18:28:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.210 18:28:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:53.210 ************************************ 00:07:53.210 START TEST locking_overlapped_coremask_via_rpc 00:07:53.210 ************************************ 00:07:53.210 18:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:53.210 18:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=74355 00:07:53.210 18:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 74355 /var/tmp/spdk.sock 00:07:53.210 18:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:53.210 18:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 74355 ']' 00:07:53.210 18:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.210 18:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.211 18:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.211 18:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.211 18:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.211 [2024-12-14 18:28:08.834967] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:53.211 [2024-12-14 18:28:08.835210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74355 ] 00:07:53.469 [2024-12-14 18:28:08.959584] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:53.469 [2024-12-14 18:28:08.959645] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.469 [2024-12-14 18:28:09.034932] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.469 [2024-12-14 18:28:09.035107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.469 [2024-12-14 18:28:09.035110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.404 18:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.404 18:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:54.404 18:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=74385 00:07:54.404 18:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:54.404 18:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 74385 /var/tmp/spdk2.sock 00:07:54.404 18:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 74385 ']' 00:07:54.404 18:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:54.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:54.404 18:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:54.404 18:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:54.404 18:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:54.404 18:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.404 [2024-12-14 18:28:09.869692] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:54.404 [2024-12-14 18:28:09.870040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74385 ] 00:07:54.404 [2024-12-14 18:28:10.018430] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:54.404 [2024-12-14 18:28:10.018488] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.662 [2024-12-14 18:28:10.204089] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.662 [2024-12-14 18:28:10.204188] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:54.662 [2024-12-14 18:28:10.204189] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.230 [2024-12-14 18:28:10.930758] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 74355 has claimed it. 
00:07:55.230 2024/12/14 18:28:10 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:07:55.230 request: 00:07:55.230 { 00:07:55.230 "method": "framework_enable_cpumask_locks", 00:07:55.230 "params": {} 00:07:55.230 } 00:07:55.230 Got JSON-RPC error response 00:07:55.230 GoRPCClient: error on JSON-RPC call 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 74355 /var/tmp/spdk.sock 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 74355 ']' 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.230 18:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.489 18:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.489 18:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:55.489 18:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 74385 /var/tmp/spdk2.sock 00:07:55.489 18:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 74385 ']' 00:07:55.489 18:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:55.489 18:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:55.489 18:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
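[Annotation] The exchange above is the heart of this test: both targets were started with --disable-cpumask-locks, the first (pid 74355, mask 0x7, cores 0-2) then claimed its cores via the framework_enable_cpumask_locks RPC, and the same call against the second target (pid 74385, mask 0x1c, cores 2-4) fails with Code=-32603 because the masks intersect on core 2 (0x7 & 0x1c = 0x4). A minimal sketch of the same sequence, where spdk_tgt and rpc.py abbreviate the full /home/vagrant/spdk_repo/spdk paths shown in the log; this is an illustration of the flow, not the test script itself:

    printf 'shared core mask: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4 -> core 2
    spdk_tgt -m 0x7 --disable-cpumask-locks &              # cores 0-2, locks off at startup
    spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    rpc.py framework_enable_cpumask_locks                  # first target claims cores 0,1,2
    rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo 'expected failure: core 2 already claimed (Code=-32603)'
    ls /var/tmp/spdk_cpu_lock_*                            # one lock file per claimed core

After the failed claim, check_remaining_locks (visible below) asserts that exactly /var/tmp/spdk_cpu_lock_000 through _002 exist, i.e. only the first target's cores are locked.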
00:07:55.489 18:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.489 18:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.747 18:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.747 18:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:55.747 18:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:55.747 18:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:55.747 18:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:55.747 18:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:55.747 00:07:55.747 real 0m2.714s 00:07:55.747 user 0m1.415s 00:07:55.748 sys 0m0.234s 00:07:55.748 18:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.748 18:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.748 ************************************ 00:07:55.748 END TEST locking_overlapped_coremask_via_rpc 00:07:55.748 ************************************ 00:07:56.006 18:28:11 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:56.006 18:28:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 74355 ]] 00:07:56.006 18:28:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 74355 00:07:56.006 18:28:11 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 74355 ']' 00:07:56.006 18:28:11 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 74355 00:07:56.006 18:28:11 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:56.006 18:28:11 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.006 18:28:11 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74355 00:07:56.006 18:28:11 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.006 18:28:11 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.006 18:28:11 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74355' 00:07:56.006 killing process with pid 74355 00:07:56.006 18:28:11 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 74355 00:07:56.006 18:28:11 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 74355 00:07:56.585 18:28:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 74385 ]] 00:07:56.585 18:28:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 74385 00:07:56.585 18:28:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 74385 ']' 00:07:56.585 18:28:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 74385 00:07:56.585 18:28:12 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:56.585 18:28:12 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.585 
18:28:12 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74385 00:07:56.585 18:28:12 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:56.585 18:28:12 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:56.585 killing process with pid 74385 00:07:56.585 18:28:12 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74385' 00:07:56.585 18:28:12 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 74385 00:07:56.585 18:28:12 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 74385 00:07:57.178 18:28:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:57.178 18:28:12 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:57.178 18:28:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 74355 ]] 00:07:57.178 18:28:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 74355 00:07:57.178 18:28:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 74355 ']' 00:07:57.178 18:28:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 74355 00:07:57.178 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74355) - No such process 00:07:57.178 Process with pid 74355 is not found 00:07:57.178 18:28:12 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 74355 is not found' 00:07:57.178 18:28:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 74385 ]] 00:07:57.178 18:28:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 74385 00:07:57.178 18:28:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 74385 ']' 00:07:57.178 18:28:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 74385 00:07:57.178 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74385) - No such process 00:07:57.178 Process with pid 74385 is not found 00:07:57.178 18:28:12 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 74385 is not found' 00:07:57.178 18:28:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:57.178 00:07:57.178 real 0m21.698s 00:07:57.178 user 0m37.010s 00:07:57.179 sys 0m6.374s 00:07:57.179 18:28:12 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.179 ************************************ 00:07:57.179 END TEST cpu_locks 00:07:57.179 18:28:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:57.179 ************************************ 00:07:57.179 ************************************ 00:07:57.179 END TEST event 00:07:57.179 ************************************ 00:07:57.179 00:07:57.179 real 0m50.518s 00:07:57.179 user 1m37.120s 00:07:57.179 sys 0m10.579s 00:07:57.179 18:28:12 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.179 18:28:12 event -- common/autotest_common.sh@10 -- # set +x 00:07:57.179 18:28:12 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:57.179 18:28:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:57.179 18:28:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.179 18:28:12 -- common/autotest_common.sh@10 -- # set +x 00:07:57.179 ************************************ 00:07:57.179 START TEST thread 00:07:57.179 ************************************ 00:07:57.179 18:28:12 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:57.179 * Looking for test storage... 
00:07:57.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:57.179 18:28:12 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:57.179 18:28:12 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:57.179 18:28:12 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:57.438 18:28:12 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:57.438 18:28:12 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.438 18:28:12 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.438 18:28:12 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.438 18:28:12 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.438 18:28:12 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.438 18:28:12 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.438 18:28:12 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.438 18:28:12 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.438 18:28:12 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.438 18:28:12 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.438 18:28:12 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.438 18:28:12 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:57.438 18:28:12 thread -- scripts/common.sh@345 -- # : 1 00:07:57.438 18:28:12 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.438 18:28:12 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:57.438 18:28:12 thread -- scripts/common.sh@365 -- # decimal 1 00:07:57.438 18:28:12 thread -- scripts/common.sh@353 -- # local d=1 00:07:57.438 18:28:12 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.438 18:28:12 thread -- scripts/common.sh@355 -- # echo 1 00:07:57.438 18:28:12 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.438 18:28:12 thread -- scripts/common.sh@366 -- # decimal 2 00:07:57.438 18:28:12 thread -- scripts/common.sh@353 -- # local d=2 00:07:57.438 18:28:12 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.438 18:28:12 thread -- scripts/common.sh@355 -- # echo 2 00:07:57.438 18:28:12 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.438 18:28:12 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.438 18:28:12 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.438 18:28:12 thread -- scripts/common.sh@368 -- # return 0 00:07:57.438 18:28:12 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.438 18:28:12 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:57.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.438 --rc genhtml_branch_coverage=1 00:07:57.438 --rc genhtml_function_coverage=1 00:07:57.438 --rc genhtml_legend=1 00:07:57.438 --rc geninfo_all_blocks=1 00:07:57.438 --rc geninfo_unexecuted_blocks=1 00:07:57.438 00:07:57.438 ' 00:07:57.438 18:28:12 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:57.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.438 --rc genhtml_branch_coverage=1 00:07:57.438 --rc genhtml_function_coverage=1 00:07:57.438 --rc genhtml_legend=1 00:07:57.439 --rc geninfo_all_blocks=1 00:07:57.439 --rc geninfo_unexecuted_blocks=1 00:07:57.439 00:07:57.439 ' 00:07:57.439 18:28:12 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:57.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:57.439 --rc genhtml_branch_coverage=1 00:07:57.439 --rc genhtml_function_coverage=1 00:07:57.439 --rc genhtml_legend=1 00:07:57.439 --rc geninfo_all_blocks=1 00:07:57.439 --rc geninfo_unexecuted_blocks=1 00:07:57.439 00:07:57.439 ' 00:07:57.439 18:28:12 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:57.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.439 --rc genhtml_branch_coverage=1 00:07:57.439 --rc genhtml_function_coverage=1 00:07:57.439 --rc genhtml_legend=1 00:07:57.439 --rc geninfo_all_blocks=1 00:07:57.439 --rc geninfo_unexecuted_blocks=1 00:07:57.439 00:07:57.439 ' 00:07:57.439 18:28:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:57.439 18:28:12 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:57.439 18:28:12 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.439 18:28:12 thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.439 ************************************ 00:07:57.439 START TEST thread_poller_perf 00:07:57.439 ************************************ 00:07:57.439 18:28:12 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:57.439 [2024-12-14 18:28:12.984594] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:57.439 [2024-12-14 18:28:12.984720] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74545 ] 00:07:57.439 [2024-12-14 18:28:13.119348] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.697 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:57.697 [2024-12-14 18:28:13.197738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.633 [2024-12-14T18:28:14.386Z] ====================================== 00:07:58.633 [2024-12-14T18:28:14.386Z] busy:2205877470 (cyc) 00:07:58.633 [2024-12-14T18:28:14.386Z] total_run_count: 396000 00:07:58.633 [2024-12-14T18:28:14.386Z] tsc_hz: 2200000000 (cyc) 00:07:58.633 [2024-12-14T18:28:14.386Z] ====================================== 00:07:58.633 [2024-12-14T18:28:14.386Z] poller_cost: 5570 (cyc), 2531 (nsec) 00:07:58.633 00:07:58.633 real 0m1.346s 00:07:58.633 user 0m1.166s 00:07:58.633 sys 0m0.074s 00:07:58.633 18:28:14 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.633 18:28:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:58.633 ************************************ 00:07:58.633 END TEST thread_poller_perf 00:07:58.633 ************************************ 00:07:58.633 18:28:14 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:58.633 18:28:14 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:58.633 18:28:14 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.633 18:28:14 thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.633 ************************************ 00:07:58.633 START TEST thread_poller_perf 00:07:58.633 ************************************ 00:07:58.633 18:28:14 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:58.633 [2024-12-14 18:28:14.383066] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:58.633 [2024-12-14 18:28:14.383184] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74581 ] 00:07:58.892 [2024-12-14 18:28:14.511748] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.892 Running 1000 pollers for 1 seconds with 0 microseconds period. 
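[Annotation] poller_cost in the summary tables is simple arithmetic over the counters printed above it: busy cycles divided by total_run_count gives cycles per poller invocation, and dividing by tsc_hz converts that to nanoseconds. For the 1-microsecond-period run just completed: 2205877470 / 396000 = 5570 cycles, and 5570 / 2.2 GHz = 2531 ns, matching the printed "poller_cost: 5570 (cyc), 2531 (nsec)". A one-line check (plain awk; it assumes the tool truncates the cycle count before converting, which is what the printed values suggest):

    awk 'BEGIN { busy = 2205877470; runs = 396000; hz = 2200000000
                 cyc = int(busy / runs)                  # cycles per poller invocation
                 printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc / hz * 1e9 }'

The 0-microsecond-period run below follows the same formula with its own counters.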
00:07:58.892 [2024-12-14 18:28:14.583352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.275 [2024-12-14T18:28:16.028Z] ====================================== 00:08:00.275 [2024-12-14T18:28:16.028Z] busy:2202134994 (cyc) 00:08:00.275 [2024-12-14T18:28:16.028Z] total_run_count: 5191000 00:08:00.275 [2024-12-14T18:28:16.028Z] tsc_hz: 2200000000 (cyc) 00:08:00.275 [2024-12-14T18:28:16.028Z] ====================================== 00:08:00.275 [2024-12-14T18:28:16.028Z] poller_cost: 424 (cyc), 192 (nsec) 00:08:00.275 00:08:00.275 real 0m1.302s 00:08:00.275 user 0m1.130s 00:08:00.275 sys 0m0.066s 00:08:00.275 18:28:15 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.275 18:28:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:00.275 ************************************ 00:08:00.275 END TEST thread_poller_perf 00:08:00.275 ************************************ 00:08:00.275 18:28:15 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:00.275 00:08:00.275 real 0m2.929s 00:08:00.275 user 0m2.425s 00:08:00.275 sys 0m0.289s 00:08:00.275 18:28:15 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.275 18:28:15 thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.275 ************************************ 00:08:00.275 END TEST thread 00:08:00.275 ************************************ 00:08:00.275 18:28:15 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:00.275 18:28:15 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:00.275 18:28:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:00.275 18:28:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.275 18:28:15 -- common/autotest_common.sh@10 -- # set +x 00:08:00.275 ************************************ 00:08:00.275 START TEST app_cmdline 00:08:00.275 ************************************ 00:08:00.275 18:28:15 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:00.275 * Looking for test storage... 
00:08:00.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:00.275 18:28:15 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:00.275 18:28:15 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:08:00.275 18:28:15 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:00.275 18:28:15 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:00.275 18:28:15 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.275 18:28:15 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.275 18:28:15 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.275 18:28:15 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.275 18:28:15 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.275 18:28:15 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.275 18:28:15 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.275 18:28:15 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.275 18:28:15 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.275 18:28:15 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.275 18:28:15 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.275 18:28:15 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:00.275 18:28:15 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:00.275 18:28:15 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.275 18:28:15 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:00.275 18:28:15 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:00.275 18:28:15 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:00.275 18:28:15 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.276 18:28:15 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:00.276 18:28:15 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.276 18:28:15 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:00.276 18:28:15 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:00.276 18:28:15 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.276 18:28:15 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:00.276 18:28:15 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.276 18:28:15 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.276 18:28:15 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.276 18:28:15 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:00.276 18:28:15 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.276 18:28:15 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:00.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.276 --rc genhtml_branch_coverage=1 00:08:00.276 --rc genhtml_function_coverage=1 00:08:00.276 --rc genhtml_legend=1 00:08:00.276 --rc geninfo_all_blocks=1 00:08:00.276 --rc geninfo_unexecuted_blocks=1 00:08:00.276 00:08:00.276 ' 00:08:00.276 18:28:15 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:00.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.276 --rc genhtml_branch_coverage=1 00:08:00.276 --rc genhtml_function_coverage=1 00:08:00.276 --rc genhtml_legend=1 00:08:00.276 --rc geninfo_all_blocks=1 00:08:00.276 --rc geninfo_unexecuted_blocks=1 00:08:00.276 
00:08:00.276 ' 00:08:00.276 18:28:15 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:00.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.276 --rc genhtml_branch_coverage=1 00:08:00.276 --rc genhtml_function_coverage=1 00:08:00.276 --rc genhtml_legend=1 00:08:00.276 --rc geninfo_all_blocks=1 00:08:00.276 --rc geninfo_unexecuted_blocks=1 00:08:00.276 00:08:00.276 ' 00:08:00.276 18:28:15 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:00.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.276 --rc genhtml_branch_coverage=1 00:08:00.276 --rc genhtml_function_coverage=1 00:08:00.276 --rc genhtml_legend=1 00:08:00.276 --rc geninfo_all_blocks=1 00:08:00.276 --rc geninfo_unexecuted_blocks=1 00:08:00.276 00:08:00.276 ' 00:08:00.276 18:28:15 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:00.276 18:28:15 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=74663 00:08:00.276 18:28:15 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 74663 00:08:00.276 18:28:15 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:00.276 18:28:15 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 74663 ']' 00:08:00.276 18:28:15 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.276 18:28:15 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.276 18:28:15 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.276 18:28:15 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.276 18:28:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:00.535 [2024-12-14 18:28:16.057807] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:00.535 [2024-12-14 18:28:16.057947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74663 ] 00:08:00.535 [2024-12-14 18:28:16.200034] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.794 [2024-12-14 18:28:16.291249] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.362 18:28:17 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.362 18:28:17 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:01.362 18:28:17 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:01.929 { 00:08:01.929 "fields": { 00:08:01.929 "commit": "b18e1bd62", 00:08:01.929 "major": 24, 00:08:01.929 "minor": 9, 00:08:01.929 "patch": 1, 00:08:01.929 "suffix": "-pre" 00:08:01.929 }, 00:08:01.929 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62" 00:08:01.929 } 00:08:01.929 18:28:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:01.929 18:28:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:01.929 18:28:17 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:01.929 18:28:17 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:01.929 18:28:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:01.929 18:28:17 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.929 18:28:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:01.929 18:28:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:01.929 18:28:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:01.929 18:28:17 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.929 18:28:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:01.929 18:28:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:01.929 18:28:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.929 18:28:17 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:01.929 18:28:17 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.929 18:28:17 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.929 18:28:17 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.929 18:28:17 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.929 18:28:17 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.929 18:28:17 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.929 18:28:17 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.929 18:28:17 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.929 18:28:17 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:01.929 18:28:17 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:02.187 2024/12/14 18:28:17 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:08:02.187 request: 00:08:02.187 { 00:08:02.187 "method": "env_dpdk_get_mem_stats", 00:08:02.187 "params": {} 00:08:02.187 } 00:08:02.187 Got JSON-RPC error response 00:08:02.187 GoRPCClient: error on JSON-RPC call 00:08:02.187 18:28:17 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:02.187 18:28:17 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:02.187 18:28:17 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:02.187 18:28:17 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:02.187 18:28:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 74663 00:08:02.187 18:28:17 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 74663 ']' 00:08:02.187 18:28:17 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 74663 00:08:02.187 18:28:17 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:02.187 18:28:17 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:02.187 18:28:17 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74663 00:08:02.187 18:28:17 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:02.187 18:28:17 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:02.187 killing process with pid 74663 00:08:02.187 18:28:17 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74663' 00:08:02.187 18:28:17 app_cmdline -- common/autotest_common.sh@969 -- # kill 74663 00:08:02.187 18:28:17 app_cmdline -- common/autotest_common.sh@974 -- # wait 74663 00:08:02.752 00:08:02.752 real 0m2.573s 00:08:02.752 user 0m3.014s 00:08:02.752 sys 0m0.735s 00:08:02.752 18:28:18 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.752 ************************************ 00:08:02.752 END TEST app_cmdline 00:08:02.752 18:28:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:02.752 ************************************ 00:08:02.752 18:28:18 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:02.752 18:28:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.752 18:28:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.752 18:28:18 -- common/autotest_common.sh@10 -- # set +x 00:08:02.752 ************************************ 00:08:02.752 START TEST version 00:08:02.752 ************************************ 00:08:02.752 18:28:18 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:02.752 * Looking for test storage... 
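[Annotation] The app_cmdline run above also demonstrates the RPC allowlist: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer normally (the JSON version object and the sorted method list above), while the deliberately disallowed env_dpdk_get_mem_stats call comes back with Code=-32601 Msg=Method not found. A compressed sketch of the probe, using the same commands that appear in the trace with long paths abbreviated; illustrative only:

    spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    rpc.py spdk_get_version                       # allowed: returns the JSON version object
    rpc.py rpc_get_methods | jq -r '.[]' | sort   # exactly the two allowed methods
    rpc.py env_dpdk_get_mem_stats                 # rejected: Code=-32601 Method not found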
00:08:02.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:02.752 18:28:18 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:02.752 18:28:18 version -- common/autotest_common.sh@1681 -- # lcov --version 00:08:02.752 18:28:18 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:03.010 18:28:18 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:03.010 18:28:18 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:03.010 18:28:18 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:03.010 18:28:18 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:03.010 18:28:18 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.010 18:28:18 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:03.010 18:28:18 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:03.010 18:28:18 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:03.010 18:28:18 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:03.010 18:28:18 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:03.010 18:28:18 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:03.010 18:28:18 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:03.010 18:28:18 version -- scripts/common.sh@344 -- # case "$op" in 00:08:03.010 18:28:18 version -- scripts/common.sh@345 -- # : 1 00:08:03.010 18:28:18 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:03.010 18:28:18 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:03.010 18:28:18 version -- scripts/common.sh@365 -- # decimal 1 00:08:03.010 18:28:18 version -- scripts/common.sh@353 -- # local d=1 00:08:03.010 18:28:18 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.010 18:28:18 version -- scripts/common.sh@355 -- # echo 1 00:08:03.010 18:28:18 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:03.010 18:28:18 version -- scripts/common.sh@366 -- # decimal 2 00:08:03.010 18:28:18 version -- scripts/common.sh@353 -- # local d=2 00:08:03.010 18:28:18 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.010 18:28:18 version -- scripts/common.sh@355 -- # echo 2 00:08:03.010 18:28:18 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:03.010 18:28:18 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:03.011 18:28:18 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:03.011 18:28:18 version -- scripts/common.sh@368 -- # return 0 00:08:03.011 18:28:18 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.011 18:28:18 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:03.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.011 --rc genhtml_branch_coverage=1 00:08:03.011 --rc genhtml_function_coverage=1 00:08:03.011 --rc genhtml_legend=1 00:08:03.011 --rc geninfo_all_blocks=1 00:08:03.011 --rc geninfo_unexecuted_blocks=1 00:08:03.011 00:08:03.011 ' 00:08:03.011 18:28:18 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:03.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.011 --rc genhtml_branch_coverage=1 00:08:03.011 --rc genhtml_function_coverage=1 00:08:03.011 --rc genhtml_legend=1 00:08:03.011 --rc geninfo_all_blocks=1 00:08:03.011 --rc geninfo_unexecuted_blocks=1 00:08:03.011 00:08:03.011 ' 00:08:03.011 18:28:18 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:03.011 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:03.011 --rc genhtml_branch_coverage=1 00:08:03.011 --rc genhtml_function_coverage=1 00:08:03.011 --rc genhtml_legend=1 00:08:03.011 --rc geninfo_all_blocks=1 00:08:03.011 --rc geninfo_unexecuted_blocks=1 00:08:03.011 00:08:03.011 ' 00:08:03.011 18:28:18 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:03.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.011 --rc genhtml_branch_coverage=1 00:08:03.011 --rc genhtml_function_coverage=1 00:08:03.011 --rc genhtml_legend=1 00:08:03.011 --rc geninfo_all_blocks=1 00:08:03.011 --rc geninfo_unexecuted_blocks=1 00:08:03.011 00:08:03.011 ' 00:08:03.011 18:28:18 version -- app/version.sh@17 -- # get_header_version major 00:08:03.011 18:28:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:03.011 18:28:18 version -- app/version.sh@14 -- # cut -f2 00:08:03.011 18:28:18 version -- app/version.sh@14 -- # tr -d '"' 00:08:03.011 18:28:18 version -- app/version.sh@17 -- # major=24 00:08:03.011 18:28:18 version -- app/version.sh@18 -- # get_header_version minor 00:08:03.011 18:28:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:03.011 18:28:18 version -- app/version.sh@14 -- # cut -f2 00:08:03.011 18:28:18 version -- app/version.sh@14 -- # tr -d '"' 00:08:03.011 18:28:18 version -- app/version.sh@18 -- # minor=9 00:08:03.011 18:28:18 version -- app/version.sh@19 -- # get_header_version patch 00:08:03.011 18:28:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:03.011 18:28:18 version -- app/version.sh@14 -- # cut -f2 00:08:03.011 18:28:18 version -- app/version.sh@14 -- # tr -d '"' 00:08:03.011 18:28:18 version -- app/version.sh@19 -- # patch=1 00:08:03.011 18:28:18 version -- app/version.sh@20 -- # get_header_version suffix 00:08:03.011 18:28:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:03.011 18:28:18 version -- app/version.sh@14 -- # cut -f2 00:08:03.011 18:28:18 version -- app/version.sh@14 -- # tr -d '"' 00:08:03.011 18:28:18 version -- app/version.sh@20 -- # suffix=-pre 00:08:03.011 18:28:18 version -- app/version.sh@22 -- # version=24.9 00:08:03.011 18:28:18 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:03.011 18:28:18 version -- app/version.sh@25 -- # version=24.9.1 00:08:03.011 18:28:18 version -- app/version.sh@28 -- # version=24.9.1rc0 00:08:03.011 18:28:18 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:03.011 18:28:18 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:03.011 18:28:18 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:08:03.011 18:28:18 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:08:03.011 00:08:03.011 real 0m0.261s 00:08:03.011 user 0m0.171s 00:08:03.011 sys 0m0.129s 00:08:03.011 18:28:18 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.011 18:28:18 version -- common/autotest_common.sh@10 -- # set +x 00:08:03.011 ************************************ 00:08:03.011 END TEST version 
00:08:03.011 ************************************ 00:08:03.011 18:28:18 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:03.011 18:28:18 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:03.011 18:28:18 -- spdk/autotest.sh@194 -- # uname -s 00:08:03.011 18:28:18 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:03.011 18:28:18 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:03.011 18:28:18 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:03.011 18:28:18 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:03.011 18:28:18 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:03.011 18:28:18 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:03.011 18:28:18 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:03.011 18:28:18 -- common/autotest_common.sh@10 -- # set +x 00:08:03.011 18:28:18 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:03.011 18:28:18 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:03.011 18:28:18 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:08:03.011 18:28:18 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:08:03.011 18:28:18 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:08:03.011 18:28:18 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:08:03.011 18:28:18 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:03.011 18:28:18 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:03.011 18:28:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.011 18:28:18 -- common/autotest_common.sh@10 -- # set +x 00:08:03.270 ************************************ 00:08:03.270 START TEST nvmf_tcp 00:08:03.270 ************************************ 00:08:03.270 18:28:18 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:03.270 * Looking for test storage... 00:08:03.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:03.270 18:28:18 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:03.270 18:28:18 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:08:03.270 18:28:18 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:03.270 18:28:18 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:03.270 18:28:18 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:03.270 18:28:18 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.270 18:28:18 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:03.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.270 --rc genhtml_branch_coverage=1 00:08:03.270 --rc genhtml_function_coverage=1 00:08:03.270 --rc genhtml_legend=1 00:08:03.270 --rc geninfo_all_blocks=1 00:08:03.270 --rc geninfo_unexecuted_blocks=1 00:08:03.270 00:08:03.270 ' 00:08:03.270 18:28:18 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:03.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.270 --rc genhtml_branch_coverage=1 00:08:03.270 --rc genhtml_function_coverage=1 00:08:03.270 --rc genhtml_legend=1 00:08:03.270 --rc geninfo_all_blocks=1 00:08:03.270 --rc geninfo_unexecuted_blocks=1 00:08:03.270 00:08:03.270 ' 00:08:03.270 18:28:18 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:03.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.270 --rc genhtml_branch_coverage=1 00:08:03.270 --rc genhtml_function_coverage=1 00:08:03.270 --rc genhtml_legend=1 00:08:03.270 --rc geninfo_all_blocks=1 00:08:03.270 --rc geninfo_unexecuted_blocks=1 00:08:03.270 00:08:03.270 ' 00:08:03.270 18:28:18 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:03.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.270 --rc genhtml_branch_coverage=1 00:08:03.270 --rc genhtml_function_coverage=1 00:08:03.270 --rc genhtml_legend=1 00:08:03.270 --rc geninfo_all_blocks=1 00:08:03.270 --rc geninfo_unexecuted_blocks=1 00:08:03.270 00:08:03.270 ' 00:08:03.270 18:28:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:03.270 18:28:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:03.270 18:28:18 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:03.270 18:28:18 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:03.270 18:28:18 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.270 18:28:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:03.270 ************************************ 00:08:03.270 START TEST nvmf_target_core 00:08:03.270 ************************************ 00:08:03.271 18:28:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:03.530 * Looking for test storage... 00:08:03.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:03.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.530 --rc genhtml_branch_coverage=1 00:08:03.530 --rc genhtml_function_coverage=1 00:08:03.530 --rc genhtml_legend=1 00:08:03.530 --rc geninfo_all_blocks=1 00:08:03.530 --rc geninfo_unexecuted_blocks=1 00:08:03.530 00:08:03.530 ' 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:03.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.530 --rc genhtml_branch_coverage=1 00:08:03.530 --rc genhtml_function_coverage=1 00:08:03.530 --rc genhtml_legend=1 00:08:03.530 --rc geninfo_all_blocks=1 00:08:03.530 --rc geninfo_unexecuted_blocks=1 00:08:03.530 00:08:03.530 ' 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:03.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.530 --rc genhtml_branch_coverage=1 00:08:03.530 --rc genhtml_function_coverage=1 00:08:03.530 --rc genhtml_legend=1 00:08:03.530 --rc geninfo_all_blocks=1 00:08:03.530 --rc geninfo_unexecuted_blocks=1 00:08:03.530 00:08:03.530 ' 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:03.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.530 --rc genhtml_branch_coverage=1 00:08:03.530 --rc genhtml_function_coverage=1 00:08:03.530 --rc genhtml_legend=1 00:08:03.530 --rc geninfo_all_blocks=1 00:08:03.530 --rc geninfo_unexecuted_blocks=1 00:08:03.530 00:08:03.530 ' 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:03.530 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
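The test/nvmf/common.sh defaults traced above establish the initiator's identity for every later connect: nvme gen-hostnqn mints a UUID-based host NQN, NVME_HOSTID carries the same UUID, and the NVME_HOST array feeds both into nvme-cli calls. A minimal sketch of how these variables combine into a connect command, assuming a subsystem is already listening on 10.0.0.3:4420 (the UUID-extraction line is an assumption; common.sh's own derivation is not shown in this trace):

    # Sketch only: mirrors the identity variables set in test/nvmf/common.sh.
    NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # assumed: host ID is the same UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    # Connect the way the tests do, over NVMe/TCP:
    nvme connect -t tcp -a 10.0.0.3 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 "${NVME_HOST[@]}"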
00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:03.531 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:03.531 ************************************ 00:08:03.531 START TEST nvmf_abort 00:08:03.531 ************************************ 00:08:03.531 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:03.531 * Looking for test storage... 
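The "[: : integer expression expected" message above is a genuine shell error captured in the trace: build_nvmf_app_args reaches '[' '' -eq 1 ']' at common.sh line 33, feeding an empty string to the numeric -eq operator. The test simply evaluates false and the branch is skipped, so the suite proceeds, but the error is re-logged every time common.sh is sourced. A defensive sketch of the pattern (SOME_FLAG and the branch body are placeholders, not the actual line 33):

    # Default the variable before the numeric test so '' never reaches -eq:
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--hypothetical-extra-arg)   # placeholder branch body
    fi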
00:08:03.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:03.790 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:03.790 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:08:03.790 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:03.790 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:03.790 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:03.790 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:03.790 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:03.790 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.790 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:03.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.791 --rc genhtml_branch_coverage=1 00:08:03.791 --rc genhtml_function_coverage=1 00:08:03.791 --rc genhtml_legend=1 00:08:03.791 --rc geninfo_all_blocks=1 00:08:03.791 --rc geninfo_unexecuted_blocks=1 00:08:03.791 00:08:03.791 ' 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:03.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.791 --rc genhtml_branch_coverage=1 00:08:03.791 --rc genhtml_function_coverage=1 00:08:03.791 --rc genhtml_legend=1 00:08:03.791 --rc geninfo_all_blocks=1 00:08:03.791 --rc geninfo_unexecuted_blocks=1 00:08:03.791 00:08:03.791 ' 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:03.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.791 --rc genhtml_branch_coverage=1 00:08:03.791 --rc genhtml_function_coverage=1 00:08:03.791 --rc genhtml_legend=1 00:08:03.791 --rc geninfo_all_blocks=1 00:08:03.791 --rc geninfo_unexecuted_blocks=1 00:08:03.791 00:08:03.791 ' 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:03.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.791 --rc genhtml_branch_coverage=1 00:08:03.791 --rc genhtml_function_coverage=1 00:08:03.791 --rc genhtml_legend=1 00:08:03.791 --rc geninfo_all_blocks=1 00:08:03.791 --rc geninfo_unexecuted_blocks=1 00:08:03.791 00:08:03.791 ' 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
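The xtrace above (repeated once per sourced script) is scripts/common.sh comparing the installed lcov version against 2: lt 1.15 2 splits both strings on ".", "-", and ":" into arrays, then walks the components until one side wins, returning 0 here because 1 < 2. A condensed, standalone sketch of the same comparison logic:

    # Condensed re-implementation of the version walk traced above (sketch).
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]   # all components equal
    }

    lt 1.15 2 && echo "lcov predates 2.x"   # matches the trace: exit status 0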
00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:03.791 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:03.791 
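abort.sh pins its bdev geometry here (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=4096) before nvmftestinit builds the network. The rpc_cmd calls that appear later in this trace then provision the target; condensed into a standalone sketch, the sequence is (flags copied from the trace; scripts/rpc.py talks to the default /var/tmp/spdk.sock):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256        # TCP transport
    $rpc bdev_malloc_create 64 4096 -b Malloc0                 # 64 MiB, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000            # latencies in us: ~1 s per op
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420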
18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:03.791 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:08:03.792 Cannot find device "nvmf_init_br" 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:03.792 Cannot find device "nvmf_init_br2" 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:03.792 Cannot find device "nvmf_tgt_br" 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:03.792 Cannot find device "nvmf_tgt_br2" 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:03.792 Cannot find device "nvmf_init_br" 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:03.792 Cannot find device "nvmf_init_br2" 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:03.792 Cannot find device "nvmf_tgt_br" 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:03.792 Cannot find device "nvmf_tgt_br2" 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:03.792 Cannot find device "nvmf_br" 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:08:03.792 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:03.792 Cannot find device "nvmf_init_if" 00:08:04.050 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:08:04.050 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:04.050 Cannot find device "nvmf_init_if2" 00:08:04.050 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:08:04.050 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:04.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:04.050 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:08:04.050 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:04.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:04.050 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:08:04.050 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:04.050 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:04.050 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:04.050 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:04.050 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:04.050 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:04.050 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:04.050 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:04.050 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:04.050 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:04.050 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:04.050 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:04.051 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:04.051 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:04.051 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:04.051 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:04.051 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:04.051 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:04.051 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:04.051 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:04.051 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:04.309 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:04.309 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:04.309 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:04.309 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:04.309 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:04.309 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:04.309 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:08:04.309 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:04.309 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:04.309 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:04.309 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:04.309 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:04.309 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:04.309 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.125 ms 00:08:04.309 00:08:04.309 --- 10.0.0.3 ping statistics --- 00:08:04.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.309 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:08:04.309 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:04.309 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:04.309 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:08:04.309 00:08:04.309 --- 10.0.0.4 ping statistics --- 00:08:04.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.309 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:04.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:04.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:08:04.310 00:08:04.310 --- 10.0.0.1 ping statistics --- 00:08:04.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.310 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:04.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
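The ip/iptables trace above is nvmf_veth_init building the test topology: four veth pairs, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, host-side peers enslaved to the nvmf_br bridge, port 4420 opened, then pings to verify reachability. A condensed sketch showing one initiator/target pair (the trace creates two of each):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk        # target end lives in the netns

    ip addr add 10.0.0.1/24 dev nvmf_init_if              # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br               # bridge the host-side peers
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                    # initiator -> target reachability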
00:08:04.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:08:04.310 00:08:04.310 --- 10.0.0.2 ping statistics --- 00:08:04.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.310 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # return 0 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=75103 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 75103 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 75103 ']' 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.310 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:04.310 [2024-12-14 18:28:20.039583] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
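nvmfappstart then launches the target inside the namespace with -m 0xE (reactors on cores 1-3, matching the three reactor notices that follow) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A rough sketch of that launch-and-wait pattern (the poll loop is an assumption; the real waitforlisten helper is more thorough):

    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Poll the RPC socket until the app is up, bailing out if it died:
    until scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done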
00:08:04.310 [2024-12-14 18:28:20.039777] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.572 [2024-12-14 18:28:20.186652] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:04.572 [2024-12-14 18:28:20.282214] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.572 [2024-12-14 18:28:20.282274] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.572 [2024-12-14 18:28:20.282288] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.572 [2024-12-14 18:28:20.282299] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.572 [2024-12-14 18:28:20.282308] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.572 [2024-12-14 18:28:20.282523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.572 [2024-12-14 18:28:20.283323] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.572 [2024-12-14 18:28:20.283351] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:05.507 [2024-12-14 18:28:21.145550] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:05.507 Malloc0 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:05.507 
Delay0 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:05.507 [2024-12-14 18:28:21.239224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.507 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:05.765 [2024-12-14 18:28:21.425283] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:08.294 Initializing NVMe Controllers 00:08:08.294 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:08.294 controller IO queue size 128 less than required 00:08:08.294 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:08.294 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:08.294 Initialization complete. Launching workers. 
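Because every I/O against Delay0 sits for a full second, the abort example can keep queue depth 128 saturated and cancel requests while they are still outstanding; in the counters that follow, 29809 aborts were submitted and 29752 succeeded (29752 + 57 = 29809), while only 123 of the roughly 29.9k I/Os completed before their abort landed. The invocation, restated standalone with its flags from the trace:

    # 1 core (-c 0x1), 1 second (-t 1), queue depth 128 (-q 128):
    build/examples/abort -q 128 -t 1 -c 0x1 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'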
00:08:08.294 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29748 00:08:08.294 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29809, failed to submit 62 00:08:08.294 success 29752, unsuccessful 57, failed 0 00:08:08.294 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:08.294 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.294 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:08.294 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:08.295 rmmod nvme_tcp 00:08:08.295 rmmod nvme_fabrics 00:08:08.295 rmmod nvme_keyring 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 75103 ']' 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 75103 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 75103 ']' 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 75103 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75103 00:08:08.295 killing process with pid 75103 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75103' 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 75103 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 75103 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:08.295 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:08.295 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:08.295 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:08.295 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:08.295 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:08.553 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:08.553 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:08.553 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:08.553 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:08.553 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:08.553 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.553 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.553 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.553 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:08:08.553 00:08:08.553 real 0m4.974s 00:08:08.553 user 0m12.968s 00:08:08.553 sys 0m1.228s 00:08:08.553 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.553 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:08.553 ************************************ 00:08:08.553 END TEST nvmf_abort 00:08:08.553 ************************************ 00:08:08.553 18:28:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:08.553 18:28:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:08.553 18:28:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.554 18:28:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:08.554 ************************************ 00:08:08.554 START TEST nvmf_ns_hotplug_stress 00:08:08.554 ************************************ 00:08:08.554 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:08.813 * Looking for test storage... 00:08:08.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:08.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.813 --rc genhtml_branch_coverage=1 00:08:08.813 --rc genhtml_function_coverage=1 00:08:08.813 --rc genhtml_legend=1 00:08:08.813 --rc geninfo_all_blocks=1 00:08:08.813 --rc geninfo_unexecuted_blocks=1 00:08:08.813 00:08:08.813 ' 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:08.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.813 --rc genhtml_branch_coverage=1 00:08:08.813 --rc genhtml_function_coverage=1 00:08:08.813 --rc genhtml_legend=1 00:08:08.813 --rc geninfo_all_blocks=1 00:08:08.813 --rc geninfo_unexecuted_blocks=1 00:08:08.813 00:08:08.813 ' 00:08:08.813 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:08.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.813 --rc genhtml_branch_coverage=1 00:08:08.813 --rc genhtml_function_coverage=1 00:08:08.814 --rc genhtml_legend=1 00:08:08.814 --rc geninfo_all_blocks=1 00:08:08.814 --rc geninfo_unexecuted_blocks=1 00:08:08.814 00:08:08.814 ' 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:08.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.814 --rc genhtml_branch_coverage=1 00:08:08.814 --rc genhtml_function_coverage=1 00:08:08.814 --rc genhtml_legend=1 00:08:08.814 --rc geninfo_all_blocks=1 00:08:08.814 --rc geninfo_unexecuted_blocks=1 00:08:08.814 00:08:08.814 ' 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:08.814 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:08.814 18:28:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:08.814 Cannot find device "nvmf_init_br" 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:08.814 Cannot find device "nvmf_init_br2" 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:08.814 Cannot find device "nvmf_tgt_br" 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:08.814 Cannot find device "nvmf_tgt_br2" 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:08.814 Cannot find device "nvmf_init_br" 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:08:08.814 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:08.814 Cannot find device "nvmf_init_br2" 00:08:08.815 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:08:08.815 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:08.815 Cannot find device "nvmf_tgt_br" 00:08:08.815 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:08:08.815 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:08.815 Cannot find device "nvmf_tgt_br2" 00:08:08.815 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:08:08.815 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:08.815 Cannot find device "nvmf_br" 00:08:09.073 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 00:08:09.073 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:09.073 Cannot find device "nvmf_init_if" 00:08:09.073 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:08:09.073 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:09.073 Cannot find device "nvmf_init_if2" 00:08:09.073 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:08:09.073 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:09.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.073 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:08:09.073 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:09.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:09.074 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:09.074 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:08:09.074 00:08:09.074 --- 10.0.0.3 ping statistics --- 00:08:09.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.074 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:09.074 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:08:09.074 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:08:09.074 00:08:09.074 --- 10.0.0.4 ping statistics --- 00:08:09.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.074 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:09.074 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:09.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:09.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:08:09.333 00:08:09.333 --- 10.0.0.1 ping statistics --- 00:08:09.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.333 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:09.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:08:09.333 00:08:09.333 --- 10.0.0.2 ping statistics --- 00:08:09.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.333 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # return 0 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=75435 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 75435 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 75435 ']' 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.333 18:28:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.333 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:09.333 [2024-12-14 18:28:24.940303] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:09.333 [2024-12-14 18:28:24.940406] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.591 [2024-12-14 18:28:25.087575] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:09.591 [2024-12-14 18:28:25.183034] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.591 [2024-12-14 18:28:25.183125] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.591 [2024-12-14 18:28:25.183140] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.591 [2024-12-14 18:28:25.183150] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.591 [2024-12-14 18:28:25.183160] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
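
Condensed, the nvmf_veth_init sequence traced above builds this topology: two initiator-side veth pairs stay in the root namespace, the two target-side pairs move into nvmf_tgt_ns_spdk, and all four *_br peer ends are enslaved to the nvmf_br bridge so 10.0.0.1/10.0.0.2 (initiator) can reach 10.0.0.3/10.0.0.4 (target). A minimal standalone sketch, reconstructed from the commands logged here rather than copied from test/nvmf/common.sh (names, addresses, and port come from the trace):

    # Target network namespace plus four veth pairs; the *_if ends carry
    # the IP addresses, the *_br ends get bridged together below.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Initiator addresses in the root namespace, target addresses inside.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring every link up and bridge the peer ends.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Open NVMe/TCP port 4420 on the initiator-facing interfaces and let
    # traffic hairpin across the bridge (the ipts wrapper in the trace
    # only adds an SPDK_NVMF comment to each rule for later cleanup).
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow in the log verify this wiring end to end before nvmf_tgt is started inside the namespace via ip netns exec nvmf_tgt_ns_spdk.
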
00:08:09.591 [2024-12-14 18:28:25.183321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.591 [2024-12-14 18:28:25.184514] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.591 [2024-12-14 18:28:25.184564] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.525 18:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.525 18:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:08:10.525 18:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:10.525 18:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:10.525 18:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:10.525 18:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.525 18:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:10.525 18:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:10.525 [2024-12-14 18:28:26.262196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.784 18:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:11.042 18:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:11.301 [2024-12-14 18:28:26.832941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:11.301 18:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:11.559 18:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:11.818 Malloc0 00:08:11.818 18:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:12.076 Delay0 00:08:12.076 18:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.335 18:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:12.593 NULL1 00:08:12.593 18:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:12.851 18:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # 
PERF_PID=75571 00:08:12.851 18:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:12.851 18:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:12.851 18:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.231 Read completed with error (sct=0, sc=11) 00:08:14.231 18:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.495 18:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:14.495 18:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:14.753 true 00:08:14.753 18:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:14.753 18:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.689 18:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.689 18:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:15.689 18:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:15.947 true 00:08:15.947 18:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:15.947 18:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.514 18:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.772 18:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:16.772 18:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:17.030 true 00:08:17.030 18:28:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:17.030 18:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.289 18:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.547 18:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:17.547 18:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:17.805 true 00:08:17.805 18:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:17.805 18:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.740 18:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.740 18:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:18.740 18:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:18.998 true 00:08:18.998 18:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:18.998 18:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.256 18:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.820 18:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:19.820 18:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:19.820 true 00:08:20.078 18:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:20.078 18:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.336 18:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.593 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:20.593 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:20.593 true 00:08:20.851 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 75571 00:08:20.851 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.786 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.786 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:21.786 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:22.044 true 00:08:22.044 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:22.044 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.302 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.561 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:22.561 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:22.819 true 00:08:22.819 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:22.819 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.077 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.335 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:23.335 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:23.594 true 00:08:23.594 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:23.594 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.528 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.786 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:24.786 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:25.045 true 00:08:25.045 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:25.045 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.303 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.560 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:25.560 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:25.818 true 00:08:25.818 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:25.818 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.751 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.751 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:26.751 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:27.010 true 00:08:27.010 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:27.010 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.268 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.526 18:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:27.526 18:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:27.785 true 00:08:27.785 18:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:27.785 18:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.720 18:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.978 18:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:28.978 18:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:29.236 true 00:08:29.236 18:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:29.236 18:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.494 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.751 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:29.751 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:30.009 true 00:08:30.009 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:30.009 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.267 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.529 18:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:30.529 18:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:30.796 true 00:08:30.796 18:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:30.796 18:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.732 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.990 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:31.990 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:31.990 true 00:08:32.249 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:32.249 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.507 18:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.765 18:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:32.765 18:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:32.765 true 00:08:33.023 18:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:33.023 18:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.023 18:28:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.589 18:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:33.589 18:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:33.848 true 00:08:33.848 18:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:33.848 18:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.782 18:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.040 18:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:35.040 18:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:35.298 true 00:08:35.298 18:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:35.298 18:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.556 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.814 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:35.814 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:36.072 true 00:08:36.073 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:36.073 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.331 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.589 18:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:36.589 18:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:36.848 true 00:08:36.848 18:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:36.848 18:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.415 18:28:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.415 18:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:37.415 18:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:37.982 true 00:08:37.982 18:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:37.982 18:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.549 18:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.116 18:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:39.116 18:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:39.375 true 00:08:39.375 18:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:39.375 18:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.633 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.890 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:39.890 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:40.149 true 00:08:40.149 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:40.149 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.407 18:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.666 18:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:40.666 18:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:40.925 true 00:08:40.925 18:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:40.925 18:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.861 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.861 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:41.861 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:42.428 true 00:08:42.428 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:42.428 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.428 18:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.686 18:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:42.686 18:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:42.944 true 00:08:42.944 18:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:42.944 18:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.203 Initializing NVMe Controllers 00:08:43.203 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:08:43.203 Controller IO queue size 128, less than required. 00:08:43.203 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:43.203 Controller IO queue size 128, less than required. 00:08:43.203 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:43.203 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:43.203 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:43.203 Initialization complete. Launching workers. 
00:08:43.203 ========================================================
00:08:43.203                                                                            Latency(us)
00:08:43.203 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:43.203 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     328.23       0.16  156406.91    4138.68 1024361.25
00:08:43.203 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    8646.67       4.22   14804.88    3420.10  561115.67
00:08:43.203 ========================================================
00:08:43.203 Total                                                                    :    8974.90       4.38   19983.55    3420.10 1024361.25
00:08:43.203
00:08:43.203 18:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.462 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:43.720 true 00:08:43.720 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75571 00:08:43.720 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (75571) - No such process 00:08:43.720 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 75571 00:08:43.720 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.979 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:44.238 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:44.238 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:44.238 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:44.238 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:44.238 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:44.496 null0 00:08:44.496 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:44.496 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:44.496 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:44.754 null1 00:08:44.754 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:44.754 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:44.754 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:45.013 null2 00:08:45.013 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:45.013 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:45.013 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:45.271 null3 00:08:45.271 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:45.271 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:45.271 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:45.530 null4 00:08:45.530 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:45.530 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:45.530 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:46.110 null5 00:08:46.110 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:46.110 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:46.110 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:46.110 null6 00:08:46.110 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:46.110 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:46.110 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:46.386 null7 00:08:46.644 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:46.644 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:46.644 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:46.644 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:46.644 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:46.644 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
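[Editor's note] For readability, the add_remove() worker that the xtrace above keeps stepping through (ns_hotplug_stress.sh lines 14-18) can be reconstructed from the traced commands. This is a minimal sketch: the ten-iteration loop and both rpc.py invocations are taken from the trace, while the parameter names and quoting are assumptions:

    # Sketch of the traced worker: hot-add a namespace backed by $bdev at a
    # fixed NSID, then hot-remove it, ten times in a row.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }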
00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
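[Editor's note] The surrounding driver (script lines 58-66, also visible in the trace) first creates eight 100 MB null bdevs with a 4096-byte block size, then launches one add_remove worker per bdev in the background; the "wait 76633 76635 ..." record further down collects exactly those eight PIDs. A sketch under the same assumptions as above, with the backgrounding inferred from the pids+=($!) records:

    # Create null0..null7, then run eight concurrent add/remove storms,
    # one per namespace slot, and wait for all of them to finish.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"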
00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
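[Editor's note] The detail that keeps the eight workers from colliding is the -n flag: each add pins an explicit namespace ID, and the matching remove targets that same NSID, so every worker owns one slot of nqn.2016-06.io.spdk:cnode1 for the whole run. One cycle as it would be issued by hand (rpc.py talks to the default /var/tmp/spdk.sock application socket unless -s overrides it):

    # Worker 7's single add/remove cycle against the shared subsystem.
    scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7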
00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:46.645 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 76633 76635 76636 76638 76640 76641 76643 76646 00:08:46.904 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:46.904 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:46.904 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.904 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:46.904 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:46.904 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:46.904 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:46.904 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:47.163 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.163 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.163 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:47.163 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.163 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.163 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:47.163 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.163 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.163 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:47.163 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.163 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.163 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:47.163 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.163 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.164 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:47.164 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.164 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.164 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:47.164 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.164 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.164 
18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:47.423 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.423 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.423 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:47.423 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:47.423 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.423 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:47.423 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:47.423 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:47.423 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.682 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:47.941 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.941 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.941 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:47.941 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.941 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.941 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:47.941 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:47.941 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:47.941 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.941 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:47.941 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:48.200 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:48.200 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:48.200 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:48.200 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.200 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.200 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:48.200 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.200 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.200 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:48.200 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.200 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.200 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:48.200 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.200 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.200 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:48.458 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.458 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.458 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:48.458 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.458 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.458 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:48.458 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.458 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.458 
18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:48.458 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.458 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.458 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:48.458 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:48.716 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:48.716 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:48.716 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:48.716 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.716 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:48.716 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:48.716 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:48.716 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.716 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.716 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:48.975 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.975 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.975 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:48.975 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.975 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.975 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:48.975 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.975 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.975 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:48.975 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.975 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.975 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:48.975 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.975 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.975 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:48.975 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.975 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.975 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:49.234 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.234 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.234 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:49.234 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:49.234 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:49.234 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:49.234 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:49.234 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:49.234 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.234 18:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:49.493 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:49.493 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.493 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.493 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:49.493 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.493 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.493 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:49.493 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.493 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.493 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:49.493 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.493 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.493 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:49.493 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.493 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.493 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:49.493 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.493 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.493 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:49.751 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.751 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.752 
18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:49.752 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.752 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.752 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:49.752 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:49.752 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:49.752 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:49.752 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:50.010 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:50.010 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.010 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:50.010 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:50.010 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.010 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.010 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:50.010 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.010 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.010 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:50.010 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.010 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.010 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:50.269 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.269 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.269 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:50.269 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.269 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.269 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:50.269 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.269 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.269 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:50.269 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.269 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.269 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:50.269 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.269 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.269 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:50.269 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:50.269 18:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:50.528 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:50.528 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:50.528 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:50.528 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.528 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:50.528 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:50.528 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.528 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.528 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:50.786 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.786 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.786 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:50.786 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.786 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.786 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:50.786 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.786 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.787 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:50.787 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.787 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.787 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:50.787 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.787 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.787 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:50.787 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.787 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.787 
18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:51.045 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.045 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.045 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:51.045 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:51.045 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:51.045 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:51.045 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:51.045 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.045 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:51.304 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:51.304 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:51.304 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.304 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.304 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:51.304 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.304 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.304 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:51.304 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.304 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.304 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:51.304 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.304 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.304 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:51.304 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.304 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.304 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:51.304 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.304 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.304 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:51.562 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.562 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.562 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:51.562 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.562 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.562 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:51.562 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:51.562 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:51.562 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:51.562 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:51.820 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:51.820 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.820 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:51.820 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:51.820 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.820 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.820 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:51.820 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.820 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.820 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:51.820 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.820 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.820 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:52.079 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.079 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.079 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:52.079 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.079 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.079 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:52.079 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.079 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.079 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:52.079 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.079 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.079 
18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:52.079 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.079 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.079 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:52.079 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:52.079 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:52.337 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:52.337 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:52.337 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:52.337 18:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.337 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:52.337 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.596 
18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:52.596 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:52.596 rmmod nvme_tcp 00:08:52.596 rmmod nvme_fabrics 00:08:52.854 rmmod nvme_keyring 00:08:52.854 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:52.854 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:52.854 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:52.854 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 75435 ']' 00:08:52.854 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 75435 00:08:52.854 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 75435 ']' 00:08:52.854 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 75435 00:08:52.854 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:52.855 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:52.855 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75435 00:08:52.855 killing process with pid 75435 00:08:52.855 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:52.855 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:52.855 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75435' 00:08:52.855 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 75435 00:08:52.855 18:29:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 75435 00:08:53.113 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:53.113 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:53.113 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:53.113 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:53.113 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:08:53.113 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:53.113 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:08:53.113 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:53.113 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:53.113 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:53.113 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:53.113 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:53.113 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:53.113 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:53.113 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:53.113 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:53.113 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:53.113 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:53.113 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:53.371 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:53.371 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:53.371 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:53.371 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:53.371 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.371 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.371 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.371 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:08:53.371 00:08:53.371 
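For orientation, the xtrace above is ns_hotplug_stress.sh cycling namespaces on a single subsystem: namespace IDs 1-8 are repeatedly attached from the null bdevs null0-null7 and detached again while a counter runs to ten. The interleaved ordering in the log suggests the add and remove passes overlap; the sketch below serializes one pass for readability and is an illustration, not the test script itself.

# Sketch (not the SPDK script) of one add/remove pass from the trace above;
# the real test appears to run the add and remove loops concurrently.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as seen in the trace
nqn=nqn.2016-06.io.spdk:cnode1

for (( i = 0; i < 10; ++i )); do
    for n in 1 2 3 4 5 6 7 8; do
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$(( n - 1 ))"
    done
    for n in 1 2 3 4 5 6 7 8; do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
    done
done

The trap reset, nvmftestfini, module unloads (rmmod nvme_tcp / nvme_fabrics / nvme_keyring) and veth teardown above are the standard per-test cleanup; the real/user/sys block that follows closes out the roughly 45-second test.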
real 0m44.748s 00:08:53.371 user 3m37.626s 00:08:53.371 sys 0m13.393s 00:08:53.371 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.371 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:53.371 ************************************ 00:08:53.371 END TEST nvmf_ns_hotplug_stress 00:08:53.371 ************************************ 00:08:53.371 18:29:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:53.371 18:29:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:53.371 18:29:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.371 18:29:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.371 ************************************ 00:08:53.371 START TEST nvmf_delete_subsystem 00:08:53.371 ************************************ 00:08:53.371 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:53.371 * Looking for test storage... 00:08:53.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:53.371 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:53.371 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:53.371 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:08:53.630 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:53.630 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.630 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.630 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.631 18:29:09 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:53.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.631 --rc genhtml_branch_coverage=1 00:08:53.631 --rc genhtml_function_coverage=1 00:08:53.631 --rc genhtml_legend=1 00:08:53.631 --rc geninfo_all_blocks=1 00:08:53.631 --rc geninfo_unexecuted_blocks=1 00:08:53.631 00:08:53.631 ' 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:53.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.631 --rc genhtml_branch_coverage=1 00:08:53.631 --rc genhtml_function_coverage=1 00:08:53.631 --rc genhtml_legend=1 00:08:53.631 --rc geninfo_all_blocks=1 00:08:53.631 --rc geninfo_unexecuted_blocks=1 00:08:53.631 00:08:53.631 ' 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:53.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.631 --rc genhtml_branch_coverage=1 00:08:53.631 --rc genhtml_function_coverage=1 00:08:53.631 --rc genhtml_legend=1 00:08:53.631 --rc geninfo_all_blocks=1 00:08:53.631 --rc geninfo_unexecuted_blocks=1 00:08:53.631 00:08:53.631 ' 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:53.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.631 --rc genhtml_branch_coverage=1 00:08:53.631 --rc genhtml_function_coverage=1 00:08:53.631 --rc genhtml_legend=1 00:08:53.631 --rc geninfo_all_blocks=1 00:08:53.631 --rc geninfo_unexecuted_blocks=1 00:08:53.631 00:08:53.631 ' 00:08:53.631 18:29:09 
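The block above is the coverage-tooling probe that runs at the top of each sub-test: it reads the installed lcov version and, because 1.15 compares below 2, selects the legacy --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 option spelling. A hypothetical one-line stand-in for the comparison (scripts/common.sh walks the dot-separated fields instead of shelling out to sort):

# Rough equivalent of the lt/cmp_versions check traced above; a sketch,
# not the actual helper.
version_lt() {
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
version_lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'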
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:53.631 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.631 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # 
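The "[: : integer expression expected" complaint logged above is benign: common.sh line 33 runs an arithmetic test on a variable that is unset in this job, so the test expands to '[' '' -eq 1 ']' (exactly what the xtrace shows) and bash rejects the empty string as a non-integer. A minimal reproduction with a placeholder variable name, plus the guarded spelling that avoids the message:

# SOME_FLAG stands in for whatever common.sh tests at line 33; it is a
# placeholder, not the real variable name.
unset SOME_FLAG
[ "$SOME_FLAG" -eq 1 ]          # -> [: : integer expression expected
[ "${SOME_FLAG:-0}" -eq 1 ]     # default to 0: no error, test is simply false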
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:53.632 Cannot find device "nvmf_init_br" 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:53.632 Cannot find device "nvmf_init_br2" 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:53.632 Cannot find device "nvmf_tgt_br" 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:53.632 Cannot find device "nvmf_tgt_br2" 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:53.632 Cannot find device "nvmf_init_br" 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:53.632 Cannot find device "nvmf_init_br2" 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:53.632 Cannot find device "nvmf_tgt_br" 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:53.632 Cannot find device "nvmf_tgt_br2" 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:53.632 Cannot find device "nvmf_br" 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:53.632 Cannot find device "nvmf_init_if" 00:08:53.632 18:29:09 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:53.632 Cannot find device "nvmf_init_if2" 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:53.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:53.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:53.632 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:53.891 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:53.891 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:08:53.891 00:08:53.891 --- 10.0.0.3 ping statistics --- 00:08:53.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.891 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:08:53.891 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:53.891 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:53.891 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 00:08:53.891 00:08:53.891 --- 10.0.0.4 ping statistics --- 00:08:53.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.891 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:08:54.149 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:54.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
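At this point nvmf_veth_init has built the test network the rest of the run rides on: a target network namespace, four veth pairs, one bridge joining the host-side ends, and iptables ACCEPT rules for port 4420, with the ping probes on either side of this point verifying each address answers. A condensed, stand-alone replay of those traced commands (a sketch for orientation, not the SPDK helper itself):

set -e
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if end carries traffic, the *_br end joins the bridge
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing as in the trace: initiators take .1/.2, the target .3/.4
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# one bridge ties the four host-side ends into a single L2 segment
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# let NVMe/TCP (port 4420) in from the initiator interfaces
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT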
00:08:54.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:54.149 00:08:54.149 --- 10.0.0.1 ping statistics --- 00:08:54.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.149 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:54.149 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:54.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:08:54.149 00:08:54.149 --- 10.0.0.2 ping statistics --- 00:08:54.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.149 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # return 0 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=78014 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 78014 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 78014 ']' 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
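nvmfappstart, traced here, launches the target inside the namespace so its listeners bind on the 10.0.0.3/10.0.0.4 side, then blocks until the RPC socket answers. Roughly, as a two-step sketch (an approximation of what waitforlisten does, not the helper itself):

# launch nvmf_tgt in the target namespace, flags as seen in the trace
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

# poll the RPC socket until the app is up, then let the test proceed
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done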
00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.150 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:54.150 [2024-12-14 18:29:09.753670] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:54.150 [2024-12-14 18:29:09.753799] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.408 [2024-12-14 18:29:09.899175] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:54.408 [2024-12-14 18:29:09.987292] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.408 [2024-12-14 18:29:09.987769] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.408 [2024-12-14 18:29:09.988190] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.408 [2024-12-14 18:29:09.988368] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.408 [2024-12-14 18:29:09.988487] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.408 [2024-12-14 18:29:09.988691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.408 [2024-12-14 18:29:09.988705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.343 [2024-12-14 18:29:10.901212] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.343 [2024-12-14 18:29:10.917722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.343 NULL1 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.343 Delay0 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=78065 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:55.343 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:55.602 [2024-12-14 18:29:11.122052] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
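The RPCs traced above assemble the object under test: a TCP transport, subsystem cnode1 with a 10.0.0.3:4420 listener, a 1000 MB / 512 B-block null bdev, and a delay bdev wrapped around it so that queued I/O stays outstanding long enough to race with the deletion; spdk_nvme_perf is then started against the listener and, after the two-second sleep, the subsystem is deleted out from under it. A compact replay of that sequence (each command taken from the trace; the real test issues them through rpc_cmd, which routes rpc.py into the target namespace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
"$rpc" bdev_null_create NULL1 1000 512
"$rpc" bdev_delay_create -b NULL1 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies per the trace
"$rpc" nvmf_subsystem_add_ns "$nqn" Delay0

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
"$rpc" nvmf_delete_subsystem "$nqn"   # yanks the subsystem mid-I/O
wait

The flood of "completed with error (sct=0, sc=8)" lines that follows is the intended outcome: status-code type 0 / status code 8 is the NVMe generic "command aborted due to SQ deletion" status, and the "starting I/O failed: -6" entries are consistent with new submissions returning -ENXIO once the queue pairs are gone. The point of the test is that deletion under load completes cleanly rather than hanging or crashing.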
00:08:57.502 18:29:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.502 18:29:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.502 18:29:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:57.502 Read completed with error (sct=0, sc=8) 00:08:57.502 Read completed with error (sct=0, sc=8) 00:08:57.502 starting I/O failed: -6 00:08:57.502 Write completed with error (sct=0, sc=8) 00:08:57.502 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 starting I/O failed: -6 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 starting I/O failed: -6 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 starting I/O failed: -6 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 starting I/O failed: -6 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 starting I/O failed: -6 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 starting I/O failed: -6 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 starting I/O failed: -6 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 starting I/O failed: -6 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 starting I/O failed: -6 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 starting I/O failed: -6 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 starting I/O failed: -6 00:08:57.503 [2024-12-14 18:29:13.158020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8fd70 is same with the state(6) to be set 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with 
error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 starting I/O failed: -6 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 starting I/O failed: -6 00:08:57.503 Write completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, sc=8) 00:08:57.503 Read completed with error (sct=0, 
sc=8) 00:08:57.503 [log condensed: repeated "Read/Write completed with error (sct=0, sc=8)" completions, interleaved with "starting I/O failed: -6"]
00:08:57.503 [2024-12-14 18:29:13.161207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f818c000c00 is same with the state(6) to be set
00:08:57.503 [log condensed: repeated Read/Write completions with error (sct=0, sc=8)]
00:08:58.438 [2024-12-14 18:29:14.135127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8fb90 is same with the state(6) to be set
00:08:58.438 [log condensed: repeated Read/Write completions with error (sct=0, sc=8)]
00:08:58.438 [2024-12-14 18:29:14.159177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8ff50 is same with the state(6) to be set
00:08:58.438 [log condensed: repeated Read/Write completions with error (sct=0, sc=8)]
00:08:58.438 [2024-12-14 18:29:14.160483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f818c00cfe0 is same with the state(6) to be set
00:08:58.438 [log condensed: repeated Read/Write completions with error (sct=0, sc=8)]
00:08:58.438 [2024-12-14 18:29:14.161059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f818c00d7c0 is same with the state(6) to be set
00:08:58.438 [log condensed: repeated Read/Write completions with error (sct=0, sc=8)]
00:08:58.438 [2024-12-14 18:29:14.162157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe73530 is same with the state(6) to be set
00:08:58.438 Initializing NVMe Controllers
00:08:58.438 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:08:58.438 Controller IO queue size 128, less than required.
00:08:58.438 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:58.438 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:58.438 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:58.438 Initialization complete. Launching workers.
00:08:58.438 ========================================================
00:08:58.438 Latency(us)
00:08:58.438 Device Information : IOPS MiB/s Average min max
00:08:58.438 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.30 0.08 892494.70 412.89 1044388.29
00:08:58.438 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 147.46 0.07 993383.26 330.42 2003086.46
00:08:58.438 ========================================================
00:08:58.438 Total : 318.76 0.16 939167.44 330.42 2003086.46
00:08:58.438
00:08:58.438 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.439 [2024-12-14 18:29:14.162574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8fb90 (9): Bad file descriptor 00:08:58.439 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:58.439 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:58.439 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 78065 00:08:58.439 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 78065 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (78065) - No such process 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 78065 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 78065 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 78065 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem --
common/autotest_common.sh@10 -- # set +x 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.005 [2024-12-14 18:29:14.685596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=78105 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 78105 00:08:59.005 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:59.263 [2024-12-14 18:29:14.860245] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
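[Note] The trace above is the rebuild half of the delete_subsystem test: the first subsystem deletion killed perf PID 78065 mid-run (hence the expected "kill: (78065) - No such process" and the NOT/wait bookkeeping), and the script then re-creates nqn.2016-06.io.spdk:cnode1, re-adds the 10.0.0.3:4420 TCP listener and the Delay0 namespace, and launches a second spdk_nvme_perf (PID 78105). The flood of aborted completions earlier decodes as NVMe generic status: sct=0 selects the Generic Command Status type, within which sc=0x08 is "Command Aborted due to SQ Deletion", which is what in-flight I/O should report when its queue pairs vanish. A minimal sketch of the same RPC sequence, using the stock scripts/rpc.py client rather than the harness's rpc_cmd wrapper (an equivalent reconstruction, not the verbatim test script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Re-create the subsystem: allow any host (-a), set the serial (-s),
    # cap it at 10 namespaces (-m), matching the flags in the trace above.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # Re-add the TCP listener that the initiator dials.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # Expose the Delay0 bdev as NSID 1 again.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # Deleting the subsystem while perf is still running is what produces
    # the (sct=0, sc=8) aborted completions logged above.
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1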
00:08:59.525 18:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:59.525 18:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 78105 00:08:59.525 18:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:00.094 18:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:00.095 18:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 78105 00:09:00.095 18:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:00.662 18:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:00.662 18:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 78105 00:09:00.662 18:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:01.232 18:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:01.232 18:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 78105 00:09:01.232 18:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:01.511 18:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:01.511 18:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 78105 00:09:01.511 18:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:02.091 18:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:02.091 18:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 78105 00:09:02.091 18:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:02.349 Initializing NVMe Controllers 00:09:02.349 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:02.349 Controller IO queue size 128, less than required. 00:09:02.349 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:02.349 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:02.349 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:02.349 Initialization complete. Launching workers. 
00:09:02.349 ========================================================
00:09:02.349 Latency(us)
00:09:02.349 Device Information : IOPS MiB/s Average min max
00:09:02.349 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003075.52 1000124.79 1041814.72
00:09:02.349 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005155.08 1000147.60 1042502.99
00:09:02.349 ========================================================
00:09:02.349 Total : 256.00 0.12 1004115.30 1000124.79 1042502.99
00:09:02.349
00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 78105 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (78105) - No such process 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 78105 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:02.608 rmmod nvme_tcp 00:09:02.608 rmmod nvme_fabrics 00:09:02.608 rmmod nvme_keyring 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 78014 ']' 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 78014 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 78014 ']' 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 78014 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78014 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:02.608 killing
process with pid 78014 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78014' 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 78014 00:09:02.608 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 78014 00:09:02.867 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:02.867 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:02.867 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:02.867 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:09:02.867 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:02.867 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:09:03.126 ************************************ 00:09:03.126 END TEST nvmf_delete_subsystem 00:09:03.126 ************************************ 00:09:03.126 00:09:03.126 real 0m9.819s 00:09:03.126 user 0m29.251s 00:09:03.126 sys 0m1.680s 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.126 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:03.385 18:29:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:03.385 18:29:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:03.385 18:29:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.385 18:29:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:03.385 ************************************ 00:09:03.385 START TEST nvmf_host_management 00:09:03.385 ************************************ 00:09:03.385 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:03.385 * Looking for test storage... 00:09:03.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:03.385 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:03.385 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:09:03.385 18:29:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:03.385 
18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:03.385 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:03.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.386 --rc genhtml_branch_coverage=1 00:09:03.386 --rc genhtml_function_coverage=1 00:09:03.386 --rc genhtml_legend=1 00:09:03.386 --rc geninfo_all_blocks=1 00:09:03.386 --rc geninfo_unexecuted_blocks=1 00:09:03.386 00:09:03.386 ' 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:03.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.386 --rc genhtml_branch_coverage=1 00:09:03.386 --rc genhtml_function_coverage=1 00:09:03.386 --rc genhtml_legend=1 00:09:03.386 --rc geninfo_all_blocks=1 00:09:03.386 --rc geninfo_unexecuted_blocks=1 00:09:03.386 00:09:03.386 ' 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:03.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.386 --rc genhtml_branch_coverage=1 00:09:03.386 --rc genhtml_function_coverage=1 00:09:03.386 --rc genhtml_legend=1 00:09:03.386 --rc geninfo_all_blocks=1 00:09:03.386 --rc geninfo_unexecuted_blocks=1 00:09:03.386 00:09:03.386 ' 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:03.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.386 --rc genhtml_branch_coverage=1 00:09:03.386 --rc 
genhtml_function_coverage=1 00:09:03.386 --rc genhtml_legend=1 00:09:03.386 --rc geninfo_all_blocks=1 00:09:03.386 --rc geninfo_unexecuted_blocks=1 00:09:03.386 00:09:03.386 ' 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:09:03.386 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:03.386 18:29:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:03.386 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.387 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:03.387 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:03.387 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:03.387 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:03.387 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:03.645 Cannot find device "nvmf_init_br" 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:03.645 Cannot find device "nvmf_init_br2" 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:03.645 Cannot find device "nvmf_tgt_br" 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:03.645 Cannot find device "nvmf_tgt_br2" 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:03.645 Cannot find device "nvmf_init_br" 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:03.645 Cannot find device "nvmf_init_br2" 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:03.645 Cannot find device "nvmf_tgt_br" 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:03.645 Cannot find device "nvmf_tgt_br2" 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:03.645 Cannot find device "nvmf_br" 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:09:03.645 18:29:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:03.645 Cannot find device "nvmf_init_if" 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:03.645 Cannot find device "nvmf_init_if2" 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:03.645 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:03.645 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:03.645 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:09:03.904 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:09:03.904 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms
00:09:03.904
00:09:03.904 --- 10.0.0.3 ping statistics ---
00:09:03.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:03.904 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms
00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:09:03.904 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:09:03.904 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms
00:09:03.904
00:09:03.904 --- 10.0.0.4 ping statistics ---
00:09:03.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:03.904 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms
00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:09:03.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:03.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms
00:09:03.904
00:09:03.904 --- 10.0.0.1 ping statistics ---
00:09:03.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:03.904 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms
00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:09:03.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:03.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms
00:09:03.904
00:09:03.904 --- 10.0.0.2 ping statistics ---
00:09:03.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:03.904 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms
00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # return 0 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=78405 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 78405 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- #
'[' -z 78405 ']' 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:03.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:03.904 18:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.904 [2024-12-14 18:29:19.603980] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:03.905 [2024-12-14 18:29:19.604106] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.163 [2024-12-14 18:29:19.739254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.163 [2024-12-14 18:29:19.825437] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.163 [2024-12-14 18:29:19.825513] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.163 [2024-12-14 18:29:19.825540] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.163 [2024-12-14 18:29:19.825548] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.163 [2024-12-14 18:29:19.825554] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
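[Note] starttarget launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace with core mask 0x1E, and waitforlisten then polls until PID 78405 answers on /var/tmp/spdk.sock. Mask 0x1E is binary 11110, i.e. cores 1 through 4, which matches the four "Reactor started on core ..." notices that follow; core 0 is left free for the bdevperf initiator started later with -c 0x1. A quick, SPDK-agnostic way to read such a mask (plain bash, illustrative only; the mask value is taken from the nvmfappstart call above):

    mask=0x1E
    for core in {0..7}; do
        # Bit N set in the mask means a reactor thread gets pinned to core N.
        (( mask & (1 << core) )) && echo "core $core is in the mask"
    done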
00:09:04.163 [2024-12-14 18:29:19.825703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.163 [2024-12-14 18:29:19.825867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.163 [2024-12-14 18:29:19.825999] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:04.163 [2024-12-14 18:29:19.826006] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.097 [2024-12-14 18:29:20.634732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.097 Malloc0 00:09:05.097 [2024-12-14 18:29:20.710322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=78477 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 78477 /var/tmp/bdevperf.sock 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 78477 ']' 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:05.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:05.097 { 00:09:05.097 "params": { 00:09:05.097 "name": "Nvme$subsystem", 00:09:05.097 "trtype": "$TEST_TRANSPORT", 00:09:05.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:05.097 "adrfam": "ipv4", 00:09:05.097 "trsvcid": "$NVMF_PORT", 00:09:05.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:05.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:05.097 "hdgst": ${hdgst:-false}, 00:09:05.097 "ddgst": ${ddgst:-false} 00:09:05.097 }, 00:09:05.097 "method": "bdev_nvme_attach_controller" 00:09:05.097 } 00:09:05.097 EOF 00:09:05.097 )") 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:09:05.097 18:29:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:05.097 "params": { 00:09:05.097 "name": "Nvme0", 00:09:05.097 "trtype": "tcp", 00:09:05.097 "traddr": "10.0.0.3", 00:09:05.097 "adrfam": "ipv4", 00:09:05.097 "trsvcid": "4420", 00:09:05.097 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:05.097 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:05.097 "hdgst": false, 00:09:05.097 "ddgst": false 00:09:05.097 }, 00:09:05.097 "method": "bdev_nvme_attach_controller" 00:09:05.097 }' 00:09:05.097 [2024-12-14 18:29:20.837977] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
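gen_nvmf_target_json, traced above, assembles the bdev_nvme_attach_controller parameters that bdevperf then reads as JSON over /dev/fd/63. A sketch of the same run with the config in a regular file; the "subsystems"/"config" envelope is an approximation of what the helper emits, while the params block and the bdevperf options are taken verbatim from the trace:

    # Hand bdevperf a JSON config that attaches Nvme0 over NVMe/TCP, then run
    # the same 64-deep, 64 KiB verify workload for 10 seconds.
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
        -q 64 -o 65536 -w verify -t 10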
00:09:05.097 [2024-12-14 18:29:20.838097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78477 ] 00:09:05.356 [2024-12-14 18:29:20.981294] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.356 [2024-12-14 18:29:21.075353] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.615 Running I/O for 10 seconds... 00:09:05.615 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:05.615 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:05.615 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:05.615 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.615 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.874 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.874 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:05.874 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:05.874 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:05.874 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:05.874 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:05.874 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:05.874 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:05.874 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:05.874 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:05.874 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:05.874 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.874 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.874 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.874 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:09:05.874 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:09:05.874 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:09:06.135 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 
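The waitforio loop traced above polls bdev_get_iostat until the Nvme0n1 bdev has completed at least 100 reads; the first sample comes back as 67, so the loop sleeps 0.25 s and retries (the next sample, below, reads 579 and the loop breaks). A standalone sketch of the same check, assuming rpc.py and jq are available, with the bounds mirroring the traced values:

    # Wait until a bdev has serviced at least 100 reads, as waitforio does.
    wait_for_io() {
        local sock=$1 bdev=$2 i count
        for ((i = 10; i > 0; i--)); do
            count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
                    jq -r '.bdevs[0].num_read_ops')
            [[ $count -ge 100 ]] && return 0    # enough I/O has flowed
            sleep 0.25
        done
        return 1    # the bdev never reached 100 reads
    }
    wait_for_io /var/tmp/bdevperf.sock Nvme0n1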
00:09:06.135 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:06.135 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:06.135 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:06.135 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.135 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.135 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.135 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:09:06.135 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:09:06.135 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:06.135 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:06.135 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:06.135 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:06.135 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.135 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.135 [2024-12-14 18:29:21.749885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.135 [2024-12-14 18:29:21.749953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.135 [2024-12-14 18:29:21.749977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.135 [2024-12-14 18:29:21.749988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.135 [2024-12-14 18:29:21.750001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.135 [2024-12-14 18:29:21.750012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.135 [2024-12-14 18:29:21.750024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.135 [2024-12-14 18:29:21.750033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.135 [2024-12-14 18:29:21.750044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.135 [2024-12-14 18:29:21.750053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:09:06.135 [2024-12-14 18:29:21.750079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.135 [2024-12-14 18:29:21.750088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.136 [... 57 more identical READ print_command / ABORTED - SQ DELETION completion pairs, cid:5 through cid:61 (lba:82560 through lba:89728), elided ...] 00:09:06.137 [2024-12-14 18:29:21.751340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.137 [2024-12-14 18:29:21.751349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.137 [2024-12-14 18:29:21.751359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x272d060 is same with the state(6) to be set 00:09:06.137 [2024-12-14 18:29:21.751453] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x272d060 was disconnected and freed. reset controller.
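The abort flood and the final "reset controller" above are the intended effect of the test: the nvmf_subsystem_remove_host call at the top of this block revokes host0's access while 64 commands are in flight, so the target drops the queue pair and every outstanding command completes as ABORTED - SQ DELETION; the nvmf_subsystem_add_host call that follows lets the host-side automatic reset reconnect. The RPC pair, sketched against the default target socket:

    # Revoke, then restore, host0's access to cnode0 (as the traced test does).
    scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # At this point in-flight I/O is aborted and bdev_nvme begins resetting
    # the controller; re-adding the host allows the reset to succeed.
    scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0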
00:09:06.137 [2024-12-14 18:29:21.751557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:06.137 [2024-12-14 18:29:21.751574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.137 [2024-12-14 18:29:21.751586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:06.137 [2024-12-14 18:29:21.751594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.137 [2024-12-14 18:29:21.751620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:06.137 [2024-12-14 18:29:21.751630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.137 [2024-12-14 18:29:21.751640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:06.137 [2024-12-14 18:29:21.751649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.137 [2024-12-14 18:29:21.751657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24bb530 is same with the state(6) to be set 00:09:06.137 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.137 [2024-12-14 18:29:21.752809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:06.137 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:06.137 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.137 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.137 task offset: 89984 on job bdev=Nvme0n1 fails 00:09:06.137 00:09:06.137 Latency(us) 00:09:06.137 [2024-12-14T18:29:21.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.137 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:06.137 Job: Nvme0n1 ended in about 0.44 seconds with error 00:09:06.137 Verification LBA range: start 0x0 length 0x400 00:09:06.137 Nvme0n1 : 0.44 1442.03 90.13 144.20 0.00 38968.62 6762.12 37891.72 00:09:06.137 [2024-12-14T18:29:21.890Z] =================================================================================================================== 00:09:06.137 [2024-12-14T18:29:21.890Z] Total : 1442.03 90.13 144.20 0.00 38968.62 6762.12 37891.72 00:09:06.137 [2024-12-14 18:29:21.754762] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:06.137 [2024-12-14 18:29:21.754791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24bb530 (9): Bad file descriptor 00:09:06.137 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.137 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:06.137 [2024-12-14 18:29:21.763458] 
bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:07.073 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 78477 00:09:07.073 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (78477) - No such process 00:09:07.073 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:07.073 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:07.073 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:07.073 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:07.073 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:09:07.073 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:09:07.073 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:07.073 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:07.073 { 00:09:07.073 "params": { 00:09:07.073 "name": "Nvme$subsystem", 00:09:07.073 "trtype": "$TEST_TRANSPORT", 00:09:07.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:07.073 "adrfam": "ipv4", 00:09:07.073 "trsvcid": "$NVMF_PORT", 00:09:07.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:07.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:07.073 "hdgst": ${hdgst:-false}, 00:09:07.073 "ddgst": ${ddgst:-false} 00:09:07.073 }, 00:09:07.073 "method": "bdev_nvme_attach_controller" 00:09:07.073 } 00:09:07.073 EOF 00:09:07.073 )") 00:09:07.073 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:09:07.073 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:09:07.073 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:09:07.073 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:07.073 "params": { 00:09:07.073 "name": "Nvme0", 00:09:07.073 "trtype": "tcp", 00:09:07.073 "traddr": "10.0.0.3", 00:09:07.073 "adrfam": "ipv4", 00:09:07.073 "trsvcid": "4420", 00:09:07.073 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:07.073 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:07.074 "hdgst": false, 00:09:07.074 "ddgst": false 00:09:07.074 }, 00:09:07.074 "method": "bdev_nvme_attach_controller" 00:09:07.074 }' 00:09:07.074 [2024-12-14 18:29:22.822711] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
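The first bdevperf run ended early with an error during the host-management disruption (hence the partial 0.44 s table above, and the kill -9 here finding the process already gone), so a second one-second verify run collects clean numbers below. As a quick cross-check of those numbers: at 65536 bytes per I/O, MiB/s is simply IOPS divided by 16:

    # 64 KiB per I/O => MiB/s = IOPS / 16
    awk 'BEGIN { printf "%.2f MiB/s\n", 1580.84 / 16 }'    # prints 98.80 MiB/s

which matches the 98.80 MiB/s reported alongside 1580.84 IOPS in the table that follows.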
00:09:07.074 [2024-12-14 18:29:22.822806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78523 ] 00:09:07.332 [2024-12-14 18:29:22.958430] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.332 [2024-12-14 18:29:23.042551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.590 Running I/O for 1 seconds... 00:09:08.785 1536.00 IOPS, 96.00 MiB/s 00:09:08.785 Latency(us) 00:09:08.785 [2024-12-14T18:29:24.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.785 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:08.785 Verification LBA range: start 0x0 length 0x400 00:09:08.785 Nvme0n1 : 1.01 1580.84 98.80 0.00 0.00 39721.10 7119.59 35031.97 00:09:08.785 [2024-12-14T18:29:24.538Z] =================================================================================================================== 00:09:08.785 [2024-12-14T18:29:24.538Z] Total : 1580.84 98.80 0.00 0.00 39721.10 7119.59 35031.97 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:09.043 rmmod nvme_tcp 00:09:09.043 rmmod nvme_fabrics 00:09:09.043 rmmod nvme_keyring 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 78405 ']' 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 78405 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 78405 ']' 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 78405 00:09:09.043 18:29:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78405 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:09.043 killing process with pid 78405 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78405' 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 78405 00:09:09.043 18:29:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 78405 00:09:09.302 [2024-12-14 18:29:25.043180] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:09.561 18:29:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.561 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.820 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:09:09.820 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:09.820 00:09:09.820 real 0m6.440s 00:09:09.820 user 0m23.460s 00:09:09.820 sys 0m1.774s 00:09:09.820 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.820 ************************************ 00:09:09.820 END TEST nvmf_host_management 00:09:09.820 ************************************ 00:09:09.820 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:09.820 18:29:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:09.820 18:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:09.820 18:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.820 18:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.820 ************************************ 00:09:09.820 START TEST nvmf_lvol 00:09:09.820 ************************************ 00:09:09.820 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:09.820 * Looking for test storage... 
00:09:09.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.820 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:09.820 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:09:09.820 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:09.820 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:09.820 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.820 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.820 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.820 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.820 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:10.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.080 --rc genhtml_branch_coverage=1 00:09:10.080 --rc genhtml_function_coverage=1 00:09:10.080 --rc genhtml_legend=1 00:09:10.080 --rc geninfo_all_blocks=1 00:09:10.080 --rc geninfo_unexecuted_blocks=1 00:09:10.080 00:09:10.080 ' 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:10.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.080 --rc genhtml_branch_coverage=1 00:09:10.080 --rc genhtml_function_coverage=1 00:09:10.080 --rc genhtml_legend=1 00:09:10.080 --rc geninfo_all_blocks=1 00:09:10.080 --rc geninfo_unexecuted_blocks=1 00:09:10.080 00:09:10.080 ' 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:10.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.080 --rc genhtml_branch_coverage=1 00:09:10.080 --rc genhtml_function_coverage=1 00:09:10.080 --rc genhtml_legend=1 00:09:10.080 --rc geninfo_all_blocks=1 00:09:10.080 --rc geninfo_unexecuted_blocks=1 00:09:10.080 00:09:10.080 ' 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:10.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.080 --rc genhtml_branch_coverage=1 00:09:10.080 --rc genhtml_function_coverage=1 00:09:10.080 --rc genhtml_legend=1 00:09:10.080 --rc geninfo_all_blocks=1 00:09:10.080 --rc geninfo_unexecuted_blocks=1 00:09:10.080 00:09:10.080 ' 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.080 18:29:25 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.080 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.081 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:10.081 
18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:10.081 Cannot find device "nvmf_init_br" 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:10.081 Cannot find device "nvmf_init_br2" 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:10.081 Cannot find device "nvmf_tgt_br" 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:10.081 Cannot find device "nvmf_tgt_br2" 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:10.081 Cannot find device "nvmf_init_br" 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:10.081 Cannot find device "nvmf_init_br2" 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:10.081 Cannot find device "nvmf_tgt_br" 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:10.081 Cannot find device "nvmf_tgt_br2" 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:10.081 Cannot find device "nvmf_br" 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:10.081 Cannot find device "nvmf_init_if" 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:10.081 Cannot find device "nvmf_init_if2" 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:10.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:10.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:09:10.081 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:10.082 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:10.082 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:10.082 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:10.082 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:10.082 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:10.082 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:10.082 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:10.340 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:10.341 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:10.341 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.132 ms 00:09:10.341 00:09:10.341 --- 10.0.0.3 ping statistics --- 00:09:10.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.341 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:10.341 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:10.341 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:09:10.341 00:09:10.341 --- 10.0.0.4 ping statistics --- 00:09:10.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.341 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:10.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:10.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:09:10.341 00:09:10.341 --- 10.0.0.1 ping statistics --- 00:09:10.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.341 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:10.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:10.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:09:10.341 00:09:10.341 --- 10.0.0.2 ping statistics --- 00:09:10.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.341 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # return 0 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:10.341 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:10.341 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:10.341 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:10.341 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:10.341 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:10.341 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:10.341 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=78789 00:09:10.341 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 78789 00:09:10.341 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 78789 ']' 00:09:10.341 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.341 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:10.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.341 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.341 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.341 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:10.341 [2024-12-14 18:29:26.069130] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
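The target application has just been launched inside the namespace and the harness is now blocking until its RPC socket comes up (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message). Condensed from the trace, the startup amounts to the sketch below; the poll loop is only a stand-in for the harness's waitforlisten helper, whose actual body is not shown in this log:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    # stand-in for waitforlisten: retry an RPC until the socket answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done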
00:09:10.341 [2024-12-14 18:29:26.069218] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.600 [2024-12-14 18:29:26.206230] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:10.600 [2024-12-14 18:29:26.298107] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.600 [2024-12-14 18:29:26.298223] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.600 [2024-12-14 18:29:26.298250] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.600 [2024-12-14 18:29:26.298261] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.600 [2024-12-14 18:29:26.298271] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.600 [2024-12-14 18:29:26.298415] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.600 [2024-12-14 18:29:26.298864] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.600 [2024-12-14 18:29:26.298870] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.858 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.858 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:09:10.858 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:10.858 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:10.858 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:10.858 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.858 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:11.116 [2024-12-14 18:29:26.813877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.116 18:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:11.683 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:11.683 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:11.941 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:11.941 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:12.199 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:12.456 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2133798a-d9ad-4cd0-b884-e8962bf8c615 00:09:12.456 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
2133798a-d9ad-4cd0-b884-e8962bf8c615 lvol 20 00:09:12.713 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c2057783-f2a6-4adc-940a-02bcf03b59ef 00:09:12.713 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:12.972 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c2057783-f2a6-4adc-940a-02bcf03b59ef 00:09:13.230 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:13.489 [2024-12-14 18:29:28.990078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:13.489 18:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:13.747 18:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:13.747 18:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=78929 00:09:13.747 18:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:14.682 18:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot c2057783-f2a6-4adc-940a-02bcf03b59ef MY_SNAPSHOT 00:09:14.941 18:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=cfa2c222-2053-46ab-ae92-4600e014aa4d 00:09:14.941 18:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize c2057783-f2a6-4adc-940a-02bcf03b59ef 30 00:09:15.199 18:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone cfa2c222-2053-46ab-ae92-4600e014aa4d MY_CLONE 00:09:15.765 18:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ab39ac54-5e42-43af-917f-be47c5e3555e 00:09:15.765 18:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate ab39ac54-5e42-43af-917f-be47c5e3555e 00:09:16.332 18:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 78929 00:09:24.485 Initializing NVMe Controllers 00:09:24.485 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:09:24.485 Controller IO queue size 128, less than required. 00:09:24.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:24.485 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:24.485 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:24.485 Initialization complete. Launching workers. 
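While the perf workers spin up against 10.0.0.3:4420, it is worth condensing the RPC sequence that assembled the stack under test: two 64 MiB malloc bdevs striped into a RAID-0, a logical volume store on top, and a 20 MiB lvol exported through nqn.2016-06.io.spdk:cnode0. Collapsed from the trace, with the returned UUIDs captured into variables instead of the literal values above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                    # Malloc0: 64 MiB, 512 B blocks
    $rpc bdev_malloc_create 64 512                    # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # LVOL_BDEV_INIT_SIZE=20
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # under the randwrite load, the volume is then snapshotted, grown, cloned and inflated:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30                  # LVOL_BDEV_FINAL_SIZE=30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                   # detach the clone from its snapshot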
00:09:24.485 ========================================================
00:09:24.485 Latency(us)
00:09:24.485 Device Information : IOPS MiB/s Average min max
00:09:24.485 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11002.50 42.98 11633.80 2616.57 80286.32
00:09:24.486 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11037.60 43.12 11602.00 3311.65 82667.67
00:09:24.486 ========================================================
00:09:24.486 Total : 22040.10 86.09 11617.88 2616.57 82667.67
00:09:24.486
00:09:24.486 18:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:09:24.486 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c2057783-f2a6-4adc-940a-02bcf03b59ef
00:09:24.743 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2133798a-d9ad-4cd0-b884-e8962bf8c615
00:09:25.001 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:25.002 rmmod nvme_tcp
00:09:25.002 rmmod nvme_fabrics
00:09:25.002 rmmod nvme_keyring
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 78789 ']'
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 78789
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 78789 ']'
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 78789
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78789
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:25.002 killing process with pid 78789
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol --
common/autotest_common.sh@968 -- # echo 'killing process with pid 78789' 00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 78789 00:09:25.002 18:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 78789 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:25.568 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:25.569 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:25.569 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:25.569 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:25.569 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.569 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.569 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.569 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:09:25.569 ************************************ 00:09:25.569 END TEST nvmf_lvol 00:09:25.569 ************************************ 00:09:25.569 00:09:25.569 real 0m15.895s 00:09:25.569 user 
1m5.335s 00:09:25.569 sys 0m3.962s 00:09:25.569 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.569 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.827 ************************************ 00:09:25.827 START TEST nvmf_lvs_grow 00:09:25.827 ************************************ 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:25.827 * Looking for test storage... 00:09:25.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:25.827 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:25.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.828 --rc genhtml_branch_coverage=1 00:09:25.828 --rc genhtml_function_coverage=1 00:09:25.828 --rc genhtml_legend=1 00:09:25.828 --rc geninfo_all_blocks=1 00:09:25.828 --rc geninfo_unexecuted_blocks=1 00:09:25.828 00:09:25.828 ' 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:25.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.828 --rc genhtml_branch_coverage=1 00:09:25.828 --rc genhtml_function_coverage=1 00:09:25.828 --rc genhtml_legend=1 00:09:25.828 --rc geninfo_all_blocks=1 00:09:25.828 --rc geninfo_unexecuted_blocks=1 00:09:25.828 00:09:25.828 ' 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:25.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.828 --rc genhtml_branch_coverage=1 00:09:25.828 --rc genhtml_function_coverage=1 00:09:25.828 --rc genhtml_legend=1 00:09:25.828 --rc geninfo_all_blocks=1 00:09:25.828 --rc geninfo_unexecuted_blocks=1 00:09:25.828 00:09:25.828 ' 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:25.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.828 --rc genhtml_branch_coverage=1 00:09:25.828 --rc genhtml_function_coverage=1 00:09:25.828 --rc genhtml_legend=1 00:09:25.828 --rc geninfo_all_blocks=1 00:09:25.828 --rc geninfo_unexecuted_blocks=1 00:09:25.828 00:09:25.828 ' 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:25.828 18:29:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.828 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.828 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
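The re-sourced common.sh prologue again derives the initiator identity from nvme-cli before any connection is attempted. Judging by the values in the trace, the assignments amount to something like the sketch below (the exact parameter expansion common.sh uses is not visible in this log):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # the bare UUID, 81b74ad8-8517-4157-88ee-39db73f4d0b6
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")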
00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:26.087 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:26.088 Cannot find device "nvmf_init_br" 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:26.088 Cannot find device "nvmf_init_br2" 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:26.088 Cannot find device "nvmf_tgt_br" 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:26.088 Cannot find device "nvmf_tgt_br2" 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:26.088 Cannot find device "nvmf_init_br" 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:26.088 Cannot find device "nvmf_init_br2" 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:26.088 Cannot find device "nvmf_tgt_br" 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:26.088 Cannot find device "nvmf_tgt_br2" 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:26.088 Cannot find device "nvmf_br" 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:26.088 Cannot find device "nvmf_init_if" 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:26.088 Cannot find device "nvmf_init_if2" 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:26.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:26.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:26.088 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
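The stretch of commands above rebuilds, for the lvs_grow suite this time, the same virtual test network the lvol test used: four veth pairs whose target-side ends are moved into the nvmf_tgt_ns_spdk namespace, with the peer ends joined by the nvmf_br bridge so that initiator addresses (10.0.0.1/2) and target addresses (10.0.0.3/4) share one L2 segment. Stripped of the xtrace wrappers, the topology comes down to:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator pair 1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator pair 2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target pair 1
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target pair 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done
    ip link set nvmf_init_if up; ip link set nvmf_init_if2 up; ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up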
00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:26.347 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:26.347 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:09:26.347 00:09:26.347 --- 10.0.0.3 ping statistics --- 00:09:26.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.347 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:26.347 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:26.347 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:09:26.347 00:09:26.347 --- 10.0.0.4 ping statistics --- 00:09:26.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.347 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:26.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:26.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:09:26.347 00:09:26.347 --- 10.0.0.1 ping statistics --- 00:09:26.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.347 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:26.347 18:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:26.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:26.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:09:26.347 00:09:26.347 --- 10.0.0.2 ping statistics --- 00:09:26.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.347 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # return 0 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=79344 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 79344 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 79344 ']' 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:26.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:26.347 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:26.605 [2024-12-14 18:29:42.111123] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
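The nvmfappstart call above amounts to: launch nvmf_tgt inside the namespace, remember its pid, and poll the RPC socket until the app answers. A minimal stand-alone sketch under that reading, assuming SPDK-style $rootdir/$SPDK_BIN_DIR variables (the real helpers are nvmfappstart in nvmf/common.sh and waitforlisten in autotest_common.sh):

    # Hypothetical stand-alone equivalent of nvmfappstart + waitforlisten.
    ip netns exec nvmf_tgt_ns_spdk "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    for _ in $(seq 1 100); do
        # rpc.py fails until the target listens on /var/tmp/spdk.sock.
        if "$rootdir/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        kill -0 "$nvmfpid" || exit 1   # give up if the target died early
        sleep 0.1
    done

Once the socket answers, the EAL banner below confirms the target came up on a single core (-m 0x1) with all tracepoint groups enabled (-e 0xFFFF).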
00:09:26.605 [2024-12-14 18:29:42.111237] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.605 [2024-12-14 18:29:42.253368] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.864 [2024-12-14 18:29:42.369265] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.864 [2024-12-14 18:29:42.369337] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.864 [2024-12-14 18:29:42.369362] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.864 [2024-12-14 18:29:42.369373] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.864 [2024-12-14 18:29:42.369384] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.864 [2024-12-14 18:29:42.369417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.430 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:27.430 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:27.430 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:27.430 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:27.430 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:27.689 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.689 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:27.947 [2024-12-14 18:29:43.553563] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.947 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:27.947 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:27.947 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.947 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:27.947 ************************************ 00:09:27.947 START TEST lvs_grow_clean 00:09:27.947 ************************************ 00:09:27.947 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:27.947 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:27.947 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:27.947 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:27.947 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:27.947 18:29:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:27.947 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:27.947 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:27.947 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:27.947 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:28.205 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:28.205 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:28.772 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=74ff7155-d9cd-42c3-a7e6-ee9f4655faa6 00:09:28.772 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74ff7155-d9cd-42c3-a7e6-ee9f4655faa6 00:09:28.772 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:29.030 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:29.030 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:29.030 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 74ff7155-d9cd-42c3-a7e6-ee9f4655faa6 lvol 150 00:09:29.288 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1bac1249-9a20-42fe-8beb-9c0f754b817d 00:09:29.288 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:29.288 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:29.545 [2024-12-14 18:29:45.178677] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:29.545 [2024-12-14 18:29:45.178769] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:29.545 true 00:09:29.545 18:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74ff7155-d9cd-42c3-a7e6-ee9f4655faa6 00:09:29.545 18:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:29.803 18:29:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:29.803 18:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:30.061 18:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1bac1249-9a20-42fe-8beb-9c0f754b817d 00:09:30.627 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:30.627 [2024-12-14 18:29:46.312244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:30.627 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:30.885 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=79521 00:09:30.885 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:30.885 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:30.885 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 79521 /var/tmp/bdevperf.sock 00:09:30.885 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 79521 ']' 00:09:30.885 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:30.885 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:30.885 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:30.885 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.885 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:30.885 [2024-12-14 18:29:46.626481] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
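With the subsystem exported, the test brings up a second SPDK app, bdevperf, as the initiator. Because bdevperf is started with -z it idles until configured over its own RPC socket; the attach and the workload kick-off are exactly what the next log lines show. A sketch of that client-side sequence, with paths abbreviated to a hypothetical $rootdir (the flags mirror the log: 4 KiB I/O, queue depth 128, random writes for 10 seconds):

    # Start bdevperf idle (-z) on its private RPC socket.
    "$rootdir/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # Attach the exported namespace over NVMe/TCP as bdev Nvme0n1...
    "$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # ...and start the queued workload; this produces the per-second
    # IOPS/latency tables further down.
    "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

While the 10-second run is in flight, the script calls bdev_lvol_grow_lvstore on the lvstore (whose backing AIO file was already truncated to 400M and rescanned earlier) and checks that total_data_clusters grows from 49 to 99 while I/O continues.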
00:09:30.885 [2024-12-14 18:29:46.626576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79521 ] 00:09:31.142 [2024-12-14 18:29:46.759552] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.142 [2024-12-14 18:29:46.854152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.400 18:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.400 18:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:31.400 18:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:31.658 Nvme0n1 00:09:31.658 18:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:31.916 [ 00:09:31.916 { 00:09:31.916 "aliases": [ 00:09:31.917 "1bac1249-9a20-42fe-8beb-9c0f754b817d" 00:09:31.917 ], 00:09:31.917 "assigned_rate_limits": { 00:09:31.917 "r_mbytes_per_sec": 0, 00:09:31.917 "rw_ios_per_sec": 0, 00:09:31.917 "rw_mbytes_per_sec": 0, 00:09:31.917 "w_mbytes_per_sec": 0 00:09:31.917 }, 00:09:31.917 "block_size": 4096, 00:09:31.917 "claimed": false, 00:09:31.917 "driver_specific": { 00:09:31.917 "mp_policy": "active_passive", 00:09:31.917 "nvme": [ 00:09:31.917 { 00:09:31.917 "ctrlr_data": { 00:09:31.917 "ana_reporting": false, 00:09:31.917 "cntlid": 1, 00:09:31.917 "firmware_revision": "24.09.1", 00:09:31.917 "model_number": "SPDK bdev Controller", 00:09:31.917 "multi_ctrlr": true, 00:09:31.917 "oacs": { 00:09:31.917 "firmware": 0, 00:09:31.917 "format": 0, 00:09:31.917 "ns_manage": 0, 00:09:31.917 "security": 0 00:09:31.917 }, 00:09:31.917 "serial_number": "SPDK0", 00:09:31.917 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:31.917 "vendor_id": "0x8086" 00:09:31.917 }, 00:09:31.917 "ns_data": { 00:09:31.917 "can_share": true, 00:09:31.917 "id": 1 00:09:31.917 }, 00:09:31.917 "trid": { 00:09:31.917 "adrfam": "IPv4", 00:09:31.917 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:31.917 "traddr": "10.0.0.3", 00:09:31.917 "trsvcid": "4420", 00:09:31.917 "trtype": "TCP" 00:09:31.917 }, 00:09:31.917 "vs": { 00:09:31.917 "nvme_version": "1.3" 00:09:31.917 } 00:09:31.917 } 00:09:31.917 ] 00:09:31.917 }, 00:09:31.917 "memory_domains": [ 00:09:31.917 { 00:09:31.917 "dma_device_id": "system", 00:09:31.917 "dma_device_type": 1 00:09:31.917 } 00:09:31.917 ], 00:09:31.917 "name": "Nvme0n1", 00:09:31.917 "num_blocks": 38912, 00:09:31.917 "numa_id": -1, 00:09:31.917 "product_name": "NVMe disk", 00:09:31.917 "supported_io_types": { 00:09:31.917 "abort": true, 00:09:31.917 "compare": true, 00:09:31.917 "compare_and_write": true, 00:09:31.917 "copy": true, 00:09:31.917 "flush": true, 00:09:31.917 "get_zone_info": false, 00:09:31.917 "nvme_admin": true, 00:09:31.917 "nvme_io": true, 00:09:31.917 "nvme_io_md": false, 00:09:31.917 "nvme_iov_md": false, 00:09:31.917 "read": true, 00:09:31.917 "reset": true, 00:09:31.917 "seek_data": false, 00:09:31.917 "seek_hole": false, 00:09:31.917 "unmap": true, 00:09:31.917 
"write": true, 00:09:31.917 "write_zeroes": true, 00:09:31.917 "zcopy": false, 00:09:31.917 "zone_append": false, 00:09:31.917 "zone_management": false 00:09:31.917 }, 00:09:31.917 "uuid": "1bac1249-9a20-42fe-8beb-9c0f754b817d", 00:09:31.917 "zoned": false 00:09:31.917 } 00:09:31.917 ] 00:09:31.917 18:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=79551 00:09:31.917 18:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:31.917 18:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:32.175 Running I/O for 10 seconds... 00:09:33.113 Latency(us) 00:09:33.113 [2024-12-14T18:29:48.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.113 Nvme0n1 : 1.00 9398.00 36.71 0.00 0.00 0.00 0.00 0.00 00:09:33.113 [2024-12-14T18:29:48.866Z] =================================================================================================================== 00:09:33.113 [2024-12-14T18:29:48.866Z] Total : 9398.00 36.71 0.00 0.00 0.00 0.00 0.00 00:09:33.113 00:09:34.049 18:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 74ff7155-d9cd-42c3-a7e6-ee9f4655faa6 00:09:34.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.049 Nvme0n1 : 2.00 9089.50 35.51 0.00 0.00 0.00 0.00 0.00 00:09:34.049 [2024-12-14T18:29:49.802Z] =================================================================================================================== 00:09:34.049 [2024-12-14T18:29:49.802Z] Total : 9089.50 35.51 0.00 0.00 0.00 0.00 0.00 00:09:34.049 00:09:34.307 true 00:09:34.307 18:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74ff7155-d9cd-42c3-a7e6-ee9f4655faa6 00:09:34.307 18:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:34.566 18:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:34.566 18:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:34.566 18:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 79551 00:09:35.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.133 Nvme0n1 : 3.00 8946.33 34.95 0.00 0.00 0.00 0.00 0.00 00:09:35.133 [2024-12-14T18:29:50.886Z] =================================================================================================================== 00:09:35.133 [2024-12-14T18:29:50.886Z] Total : 8946.33 34.95 0.00 0.00 0.00 0.00 0.00 00:09:35.133 00:09:36.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.069 Nvme0n1 : 4.00 9022.75 35.25 0.00 0.00 0.00 0.00 0.00 00:09:36.069 [2024-12-14T18:29:51.822Z] =================================================================================================================== 00:09:36.069 [2024-12-14T18:29:51.822Z] Total : 9022.75 35.25 0.00 0.00 0.00 
0.00 0.00 00:09:36.069 00:09:37.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.004 Nvme0n1 : 5.00 9029.60 35.27 0.00 0.00 0.00 0.00 0.00 00:09:37.004 [2024-12-14T18:29:52.757Z] =================================================================================================================== 00:09:37.004 [2024-12-14T18:29:52.757Z] Total : 9029.60 35.27 0.00 0.00 0.00 0.00 0.00 00:09:37.004 00:09:37.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.967 Nvme0n1 : 6.00 8868.83 34.64 0.00 0.00 0.00 0.00 0.00 00:09:37.967 [2024-12-14T18:29:53.720Z] =================================================================================================================== 00:09:37.967 [2024-12-14T18:29:53.720Z] Total : 8868.83 34.64 0.00 0.00 0.00 0.00 0.00 00:09:37.967 00:09:39.343 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.343 Nvme0n1 : 7.00 8865.71 34.63 0.00 0.00 0.00 0.00 0.00 00:09:39.343 [2024-12-14T18:29:55.096Z] =================================================================================================================== 00:09:39.343 [2024-12-14T18:29:55.096Z] Total : 8865.71 34.63 0.00 0.00 0.00 0.00 0.00 00:09:39.343 00:09:40.276 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.276 Nvme0n1 : 8.00 8899.00 34.76 0.00 0.00 0.00 0.00 0.00 00:09:40.276 [2024-12-14T18:29:56.029Z] =================================================================================================================== 00:09:40.276 [2024-12-14T18:29:56.029Z] Total : 8899.00 34.76 0.00 0.00 0.00 0.00 0.00 00:09:40.276 00:09:41.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.211 Nvme0n1 : 9.00 8881.33 34.69 0.00 0.00 0.00 0.00 0.00 00:09:41.211 [2024-12-14T18:29:56.964Z] =================================================================================================================== 00:09:41.211 [2024-12-14T18:29:56.964Z] Total : 8881.33 34.69 0.00 0.00 0.00 0.00 0.00 00:09:41.211 00:09:42.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.146 Nvme0n1 : 10.00 8884.80 34.71 0.00 0.00 0.00 0.00 0.00 00:09:42.146 [2024-12-14T18:29:57.899Z] =================================================================================================================== 00:09:42.146 [2024-12-14T18:29:57.899Z] Total : 8884.80 34.71 0.00 0.00 0.00 0.00 0.00 00:09:42.146 00:09:42.146 00:09:42.146 Latency(us) 00:09:42.146 [2024-12-14T18:29:57.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.146 Nvme0n1 : 10.00 8882.64 34.70 0.00 0.00 14404.33 5630.14 129642.12 00:09:42.146 [2024-12-14T18:29:57.899Z] =================================================================================================================== 00:09:42.146 [2024-12-14T18:29:57.899Z] Total : 8882.64 34.70 0.00 0.00 14404.33 5630.14 129642.12 00:09:42.146 { 00:09:42.146 "results": [ 00:09:42.146 { 00:09:42.146 "job": "Nvme0n1", 00:09:42.146 "core_mask": "0x2", 00:09:42.146 "workload": "randwrite", 00:09:42.146 "status": "finished", 00:09:42.146 "queue_depth": 128, 00:09:42.146 "io_size": 4096, 00:09:42.146 "runtime": 10.00277, 00:09:42.146 "iops": 8882.639508856048, 00:09:42.146 "mibps": 34.697810581468936, 00:09:42.146 "io_failed": 0, 00:09:42.146 "io_timeout": 0, 00:09:42.146 "avg_latency_us": 
14404.330326092406, 00:09:42.146 "min_latency_us": 5630.138181818182, 00:09:42.146 "max_latency_us": 129642.12363636364 00:09:42.146 } 00:09:42.146 ], 00:09:42.146 "core_count": 1 00:09:42.146 } 00:09:42.146 18:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 79521 00:09:42.146 18:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 79521 ']' 00:09:42.146 18:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 79521 00:09:42.146 18:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:42.146 18:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.146 18:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79521 00:09:42.146 18:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:42.146 18:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:42.146 18:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79521' 00:09:42.146 killing process with pid 79521 00:09:42.146 Received shutdown signal, test time was about 10.000000 seconds 00:09:42.146 00:09:42.146 Latency(us) 00:09:42.146 [2024-12-14T18:29:57.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.146 [2024-12-14T18:29:57.899Z] =================================================================================================================== 00:09:42.146 [2024-12-14T18:29:57.899Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:42.146 18:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 79521 00:09:42.146 18:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 79521 00:09:42.405 18:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:42.663 18:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:42.922 18:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74ff7155-d9cd-42c3-a7e6-ee9f4655faa6 00:09:42.922 18:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:43.181 18:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:43.181 18:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:43.181 18:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:43.439 [2024-12-14 18:29:58.998214] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore 
lvs 00:09:43.440 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74ff7155-d9cd-42c3-a7e6-ee9f4655faa6 00:09:43.440 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:43.440 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74ff7155-d9cd-42c3-a7e6-ee9f4655faa6 00:09:43.440 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.440 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.440 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.440 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.440 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.440 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.440 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.440 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:43.440 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74ff7155-d9cd-42c3-a7e6-ee9f4655faa6 00:09:43.698 2024/12/14 18:29:59 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:74ff7155-d9cd-42c3-a7e6-ee9f4655faa6], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:09:43.698 request: 00:09:43.698 { 00:09:43.698 "method": "bdev_lvol_get_lvstores", 00:09:43.698 "params": { 00:09:43.698 "uuid": "74ff7155-d9cd-42c3-a7e6-ee9f4655faa6" 00:09:43.698 } 00:09:43.698 } 00:09:43.698 Got JSON-RPC error response 00:09:43.698 GoRPCClient: error on JSON-RPC call 00:09:43.698 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:43.698 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:43.698 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:43.698 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:43.698 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:43.957 aio_bdev 00:09:43.957 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1bac1249-9a20-42fe-8beb-9c0f754b817d 00:09:43.957 18:29:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=1bac1249-9a20-42fe-8beb-9c0f754b817d 00:09:43.957 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:43.957 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:43.957 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:43.957 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:43.957 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:44.215 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1bac1249-9a20-42fe-8beb-9c0f754b817d -t 2000 00:09:44.474 [ 00:09:44.474 { 00:09:44.474 "aliases": [ 00:09:44.474 "lvs/lvol" 00:09:44.474 ], 00:09:44.474 "assigned_rate_limits": { 00:09:44.474 "r_mbytes_per_sec": 0, 00:09:44.474 "rw_ios_per_sec": 0, 00:09:44.474 "rw_mbytes_per_sec": 0, 00:09:44.474 "w_mbytes_per_sec": 0 00:09:44.474 }, 00:09:44.474 "block_size": 4096, 00:09:44.474 "claimed": false, 00:09:44.474 "driver_specific": { 00:09:44.474 "lvol": { 00:09:44.474 "base_bdev": "aio_bdev", 00:09:44.474 "clone": false, 00:09:44.474 "esnap_clone": false, 00:09:44.474 "lvol_store_uuid": "74ff7155-d9cd-42c3-a7e6-ee9f4655faa6", 00:09:44.474 "num_allocated_clusters": 38, 00:09:44.474 "snapshot": false, 00:09:44.474 "thin_provision": false 00:09:44.474 } 00:09:44.474 }, 00:09:44.474 "name": "1bac1249-9a20-42fe-8beb-9c0f754b817d", 00:09:44.474 "num_blocks": 38912, 00:09:44.474 "product_name": "Logical Volume", 00:09:44.474 "supported_io_types": { 00:09:44.474 "abort": false, 00:09:44.474 "compare": false, 00:09:44.474 "compare_and_write": false, 00:09:44.474 "copy": false, 00:09:44.474 "flush": false, 00:09:44.474 "get_zone_info": false, 00:09:44.474 "nvme_admin": false, 00:09:44.474 "nvme_io": false, 00:09:44.474 "nvme_io_md": false, 00:09:44.474 "nvme_iov_md": false, 00:09:44.474 "read": true, 00:09:44.474 "reset": true, 00:09:44.474 "seek_data": true, 00:09:44.474 "seek_hole": true, 00:09:44.474 "unmap": true, 00:09:44.474 "write": true, 00:09:44.474 "write_zeroes": true, 00:09:44.474 "zcopy": false, 00:09:44.474 "zone_append": false, 00:09:44.474 "zone_management": false 00:09:44.474 }, 00:09:44.474 "uuid": "1bac1249-9a20-42fe-8beb-9c0f754b817d", 00:09:44.474 "zoned": false 00:09:44.474 } 00:09:44.474 ] 00:09:44.474 18:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:44.474 18:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74ff7155-d9cd-42c3-a7e6-ee9f4655faa6 00:09:44.474 18:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:44.733 18:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:44.733 18:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
74ff7155-d9cd-42c3-a7e6-ee9f4655faa6 00:09:44.733 18:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:44.992 18:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:44.992 18:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1bac1249-9a20-42fe-8beb-9c0f754b817d 00:09:45.251 18:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 74ff7155-d9cd-42c3-a7e6-ee9f4655faa6 00:09:45.511 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:45.769 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:46.336 ************************************ 00:09:46.336 END TEST lvs_grow_clean 00:09:46.336 ************************************ 00:09:46.336 00:09:46.336 real 0m18.361s 00:09:46.336 user 0m17.293s 00:09:46.336 sys 0m2.407s 00:09:46.336 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.336 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:46.336 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:46.336 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:46.336 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.336 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:46.336 ************************************ 00:09:46.336 START TEST lvs_grow_dirty 00:09:46.336 ************************************ 00:09:46.336 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:46.336 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:46.336 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:46.336 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:46.336 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:46.336 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:46.336 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:46.336 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:46.336 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:46.336 
18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:46.905 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:46.905 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:46.905 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac 00:09:46.905 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:46.905 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac 00:09:47.164 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:47.164 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:47.164 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac lvol 150 00:09:47.731 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=61afd76c-39b1-45be-9800-a1fa86171211 00:09:47.731 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:47.731 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:47.731 [2024-12-14 18:30:03.469368] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:47.731 [2024-12-14 18:30:03.470002] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:47.731 true 00:09:47.990 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac 00:09:47.990 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:47.990 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:47.990 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:48.248 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 61afd76c-39b1-45be-9800-a1fa86171211 00:09:48.507 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:48.765 [2024-12-14 18:30:04.421863] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:48.765 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:49.024 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=79954 00:09:49.024 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:49.024 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:49.024 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 79954 /var/tmp/bdevperf.sock 00:09:49.024 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 79954 ']' 00:09:49.024 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:49.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:49.024 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.024 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:49.024 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.024 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.024 [2024-12-14 18:30:04.713256] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:49.024 [2024-12-14 18:30:04.713324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79954 ] 00:09:49.283 [2024-12-14 18:30:04.850165] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.283 [2024-12-14 18:30:04.934803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.540 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.540 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:49.540 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:49.798 Nvme0n1 00:09:49.798 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:50.057 [ 00:09:50.057 { 00:09:50.057 "aliases": [ 00:09:50.057 "61afd76c-39b1-45be-9800-a1fa86171211" 00:09:50.057 ], 00:09:50.057 "assigned_rate_limits": { 00:09:50.057 "r_mbytes_per_sec": 0, 00:09:50.057 "rw_ios_per_sec": 0, 00:09:50.057 "rw_mbytes_per_sec": 0, 00:09:50.057 "w_mbytes_per_sec": 0 00:09:50.057 }, 00:09:50.057 "block_size": 4096, 00:09:50.057 "claimed": false, 00:09:50.057 "driver_specific": { 00:09:50.057 "mp_policy": "active_passive", 00:09:50.057 "nvme": [ 00:09:50.057 { 00:09:50.057 "ctrlr_data": { 00:09:50.057 "ana_reporting": false, 00:09:50.057 "cntlid": 1, 00:09:50.057 "firmware_revision": "24.09.1", 00:09:50.057 "model_number": "SPDK bdev Controller", 00:09:50.057 "multi_ctrlr": true, 00:09:50.057 "oacs": { 00:09:50.057 "firmware": 0, 00:09:50.057 "format": 0, 00:09:50.057 "ns_manage": 0, 00:09:50.057 "security": 0 00:09:50.057 }, 00:09:50.057 "serial_number": "SPDK0", 00:09:50.057 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:50.057 "vendor_id": "0x8086" 00:09:50.057 }, 00:09:50.057 "ns_data": { 00:09:50.057 "can_share": true, 00:09:50.057 "id": 1 00:09:50.057 }, 00:09:50.057 "trid": { 00:09:50.057 "adrfam": "IPv4", 00:09:50.057 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:50.057 "traddr": "10.0.0.3", 00:09:50.058 "trsvcid": "4420", 00:09:50.058 "trtype": "TCP" 00:09:50.058 }, 00:09:50.058 "vs": { 00:09:50.058 "nvme_version": "1.3" 00:09:50.058 } 00:09:50.058 } 00:09:50.058 ] 00:09:50.058 }, 00:09:50.058 "memory_domains": [ 00:09:50.058 { 00:09:50.058 "dma_device_id": "system", 00:09:50.058 "dma_device_type": 1 00:09:50.058 } 00:09:50.058 ], 00:09:50.058 "name": "Nvme0n1", 00:09:50.058 "num_blocks": 38912, 00:09:50.058 "numa_id": -1, 00:09:50.058 "product_name": "NVMe disk", 00:09:50.058 "supported_io_types": { 00:09:50.058 "abort": true, 00:09:50.058 "compare": true, 00:09:50.058 "compare_and_write": true, 00:09:50.058 "copy": true, 00:09:50.058 "flush": true, 00:09:50.058 "get_zone_info": false, 00:09:50.058 "nvme_admin": true, 00:09:50.058 "nvme_io": true, 00:09:50.058 "nvme_io_md": false, 00:09:50.058 "nvme_iov_md": false, 00:09:50.058 "read": true, 00:09:50.058 "reset": true, 00:09:50.058 "seek_data": false, 00:09:50.058 "seek_hole": false, 00:09:50.058 "unmap": true, 00:09:50.058 
"write": true, 00:09:50.058 "write_zeroes": true, 00:09:50.058 "zcopy": false, 00:09:50.058 "zone_append": false, 00:09:50.058 "zone_management": false 00:09:50.058 }, 00:09:50.058 "uuid": "61afd76c-39b1-45be-9800-a1fa86171211", 00:09:50.058 "zoned": false 00:09:50.058 } 00:09:50.058 ] 00:09:50.058 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:50.058 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=79983 00:09:50.058 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:50.058 Running I/O for 10 seconds... 00:09:50.994 Latency(us) 00:09:50.994 [2024-12-14T18:30:06.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:50.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.994 Nvme0n1 : 1.00 9697.00 37.88 0.00 0.00 0.00 0.00 0.00 00:09:50.994 [2024-12-14T18:30:06.747Z] =================================================================================================================== 00:09:50.994 [2024-12-14T18:30:06.747Z] Total : 9697.00 37.88 0.00 0.00 0.00 0.00 0.00 00:09:50.994 00:09:51.929 18:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac 00:09:52.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.187 Nvme0n1 : 2.00 9687.50 37.84 0.00 0.00 0.00 0.00 0.00 00:09:52.187 [2024-12-14T18:30:07.940Z] =================================================================================================================== 00:09:52.187 [2024-12-14T18:30:07.940Z] Total : 9687.50 37.84 0.00 0.00 0.00 0.00 0.00 00:09:52.187 00:09:52.446 true 00:09:52.446 18:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac 00:09:52.446 18:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:52.710 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:52.710 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:52.710 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 79983 00:09:52.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.981 Nvme0n1 : 3.00 9534.33 37.24 0.00 0.00 0.00 0.00 0.00 00:09:52.981 [2024-12-14T18:30:08.734Z] =================================================================================================================== 00:09:52.981 [2024-12-14T18:30:08.734Z] Total : 9534.33 37.24 0.00 0.00 0.00 0.00 0.00 00:09:52.981 00:09:54.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.356 Nvme0n1 : 4.00 9581.75 37.43 0.00 0.00 0.00 0.00 0.00 00:09:54.356 [2024-12-14T18:30:10.109Z] =================================================================================================================== 00:09:54.356 [2024-12-14T18:30:10.109Z] Total : 9581.75 37.43 0.00 0.00 0.00 
0.00 0.00 00:09:54.356 00:09:55.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.292 Nvme0n1 : 5.00 9562.80 37.35 0.00 0.00 0.00 0.00 0.00 00:09:55.292 [2024-12-14T18:30:11.045Z] =================================================================================================================== 00:09:55.292 [2024-12-14T18:30:11.045Z] Total : 9562.80 37.35 0.00 0.00 0.00 0.00 0.00 00:09:55.292 00:09:56.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.227 Nvme0n1 : 6.00 9608.67 37.53 0.00 0.00 0.00 0.00 0.00 00:09:56.227 [2024-12-14T18:30:11.980Z] =================================================================================================================== 00:09:56.227 [2024-12-14T18:30:11.980Z] Total : 9608.67 37.53 0.00 0.00 0.00 0.00 0.00 00:09:56.227 00:09:57.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.161 Nvme0n1 : 7.00 9579.00 37.42 0.00 0.00 0.00 0.00 0.00 00:09:57.161 [2024-12-14T18:30:12.914Z] =================================================================================================================== 00:09:57.161 [2024-12-14T18:30:12.914Z] Total : 9579.00 37.42 0.00 0.00 0.00 0.00 0.00 00:09:57.162 00:09:58.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.096 Nvme0n1 : 8.00 9506.25 37.13 0.00 0.00 0.00 0.00 0.00 00:09:58.096 [2024-12-14T18:30:13.849Z] =================================================================================================================== 00:09:58.096 [2024-12-14T18:30:13.849Z] Total : 9506.25 37.13 0.00 0.00 0.00 0.00 0.00 00:09:58.096 00:09:59.031 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.031 Nvme0n1 : 9.00 9482.33 37.04 0.00 0.00 0.00 0.00 0.00 00:09:59.031 [2024-12-14T18:30:14.784Z] =================================================================================================================== 00:09:59.031 [2024-12-14T18:30:14.784Z] Total : 9482.33 37.04 0.00 0.00 0.00 0.00 0.00 00:09:59.031 00:09:59.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.965 Nvme0n1 : 10.00 9477.70 37.02 0.00 0.00 0.00 0.00 0.00 00:09:59.965 [2024-12-14T18:30:15.718Z] =================================================================================================================== 00:09:59.965 [2024-12-14T18:30:15.718Z] Total : 9477.70 37.02 0.00 0.00 0.00 0.00 0.00 00:09:59.965 00:09:59.965 00:09:59.965 Latency(us) 00:09:59.965 [2024-12-14T18:30:15.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.965 Nvme0n1 : 10.01 9481.11 37.04 0.00 0.00 13492.25 4587.52 91512.09 00:09:59.965 [2024-12-14T18:30:15.718Z] =================================================================================================================== 00:09:59.965 [2024-12-14T18:30:15.718Z] Total : 9481.11 37.04 0.00 0.00 13492.25 4587.52 91512.09 00:09:59.965 { 00:09:59.965 "results": [ 00:09:59.965 { 00:09:59.965 "job": "Nvme0n1", 00:09:59.965 "core_mask": "0x2", 00:09:59.965 "workload": "randwrite", 00:09:59.965 "status": "finished", 00:09:59.965 "queue_depth": 128, 00:09:59.965 "io_size": 4096, 00:09:59.965 "runtime": 10.009903, 00:09:59.965 "iops": 9481.110855919384, 00:09:59.965 "mibps": 37.03558928093509, 00:09:59.965 "io_failed": 0, 00:09:59.965 "io_timeout": 0, 00:09:59.965 "avg_latency_us": 
13492.245304826358, 00:09:59.965 "min_latency_us": 4587.52, 00:09:59.965 "max_latency_us": 91512.08727272728 00:09:59.965 } 00:09:59.965 ], 00:09:59.965 "core_count": 1 00:09:59.965 } 00:10:00.223 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 79954 00:10:00.223 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 79954 ']' 00:10:00.223 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 79954 00:10:00.223 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:10:00.223 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:00.223 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79954 00:10:00.223 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:00.223 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:00.223 killing process with pid 79954 00:10:00.224 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79954' 00:10:00.224 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 79954 00:10:00.224 Received shutdown signal, test time was about 10.000000 seconds 00:10:00.224 00:10:00.224 Latency(us) 00:10:00.224 [2024-12-14T18:30:15.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.224 [2024-12-14T18:30:15.977Z] =================================================================================================================== 00:10:00.224 [2024-12-14T18:30:15.977Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:00.224 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 79954 00:10:00.482 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:00.741 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:00.999 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac 00:10:00.999 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:01.257 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:01.257 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:01.257 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 79344 00:10:01.257 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 79344 00:10:01.257 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: 
line 75: 79344 Killed "${NVMF_APP[@]}" "$@" 00:10:01.257 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:01.257 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:01.257 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:01.257 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:01.257 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:01.257 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=80152 00:10:01.257 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:01.257 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 80152 00:10:01.257 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 80152 ']' 00:10:01.257 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.257 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:01.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.257 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.257 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:01.258 18:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:01.258 [2024-12-14 18:30:16.902590] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:01.258 [2024-12-14 18:30:16.902699] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.516 [2024-12-14 18:30:17.039320] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.516 [2024-12-14 18:30:17.119582] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.516 [2024-12-14 18:30:17.119653] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.516 [2024-12-14 18:30:17.119664] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.516 [2024-12-14 18:30:17.119671] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.516 [2024-12-14 18:30:17.119677] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
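What the lvs_grow_dirty case is exercising at this point: the previous nvmf target (pid 79344) was killed with SIGKILL while its lvstore still had unflushed metadata, and a fresh target (pid 80152) is being started so that reloading the aio backing file forces a blobstore recovery pass. Roughly the following RPC sequence reproduces the check (a minimal sketch; the backing-file path, bdev name, and lvstore UUID are the ones visible in this trace, and error handling is omitted):

  # Re-create the aio bdev over the dirty backing file; on examine, the lvol
  # module detects the unclean shutdown and recovers the blobstore.
  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_wait_for_examine
  # The lvol bdev must come back intact after recovery...
  rpc.py bdev_get_bdevs -b 61afd76c-39b1-45be-9800-a1fa86171211 -t 2000
  # ...and the recovered lvstore must still report the post-grow geometry.
  rpc.py bdev_lvol_get_lvstores -u 3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac \
    | jq -r '.[0].free_clusters'    # expected: 61 (with 99 total_data_clusters)

The trace below shows exactly this: the "Performing recovery on blobstore" notice, both blobs replayed, and free_clusters/total_data_clusters checked against 61 and 99.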
00:10:01.516 [2024-12-14 18:30:17.119713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.451 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.451 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:02.451 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:02.451 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:02.451 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:02.451 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.451 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:02.451 [2024-12-14 18:30:18.129927] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:02.451 [2024-12-14 18:30:18.130793] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:02.451 [2024-12-14 18:30:18.131852] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:02.451 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:02.451 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 61afd76c-39b1-45be-9800-a1fa86171211 00:10:02.451 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=61afd76c-39b1-45be-9800-a1fa86171211 00:10:02.451 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:02.451 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:02.451 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:02.451 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:02.451 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:02.709 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61afd76c-39b1-45be-9800-a1fa86171211 -t 2000 00:10:02.967 [ 00:10:02.967 { 00:10:02.967 "aliases": [ 00:10:02.967 "lvs/lvol" 00:10:02.967 ], 00:10:02.967 "assigned_rate_limits": { 00:10:02.967 "r_mbytes_per_sec": 0, 00:10:02.967 "rw_ios_per_sec": 0, 00:10:02.967 "rw_mbytes_per_sec": 0, 00:10:02.967 "w_mbytes_per_sec": 0 00:10:02.967 }, 00:10:02.967 "block_size": 4096, 00:10:02.967 "claimed": false, 00:10:02.967 "driver_specific": { 00:10:02.967 "lvol": { 00:10:02.967 "base_bdev": "aio_bdev", 00:10:02.967 "clone": false, 00:10:02.967 "esnap_clone": false, 00:10:02.967 "lvol_store_uuid": "3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac", 00:10:02.967 "num_allocated_clusters": 38, 00:10:02.967 "snapshot": false, 00:10:02.967 
"thin_provision": false 00:10:02.967 } 00:10:02.967 }, 00:10:02.967 "name": "61afd76c-39b1-45be-9800-a1fa86171211", 00:10:02.967 "num_blocks": 38912, 00:10:02.967 "product_name": "Logical Volume", 00:10:02.967 "supported_io_types": { 00:10:02.967 "abort": false, 00:10:02.967 "compare": false, 00:10:02.967 "compare_and_write": false, 00:10:02.967 "copy": false, 00:10:02.967 "flush": false, 00:10:02.967 "get_zone_info": false, 00:10:02.967 "nvme_admin": false, 00:10:02.967 "nvme_io": false, 00:10:02.967 "nvme_io_md": false, 00:10:02.967 "nvme_iov_md": false, 00:10:02.967 "read": true, 00:10:02.968 "reset": true, 00:10:02.968 "seek_data": true, 00:10:02.968 "seek_hole": true, 00:10:02.968 "unmap": true, 00:10:02.968 "write": true, 00:10:02.968 "write_zeroes": true, 00:10:02.968 "zcopy": false, 00:10:02.968 "zone_append": false, 00:10:02.968 "zone_management": false 00:10:02.968 }, 00:10:02.968 "uuid": "61afd76c-39b1-45be-9800-a1fa86171211", 00:10:02.968 "zoned": false 00:10:02.968 } 00:10:02.968 ] 00:10:02.968 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:02.968 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:02.968 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac 00:10:03.226 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:03.226 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac 00:10:03.226 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:03.485 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:03.485 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:03.743 [2024-12-14 18:30:19.363485] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:03.743 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac 00:10:03.743 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:10:03.743 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac 00:10:03.743 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:03.743 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:03.743 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:03.743 18:30:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:03.743 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:03.743 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:03.743 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:03.743 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:03.743 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac 00:10:04.002 2024/12/14 18:30:19 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:04.002 request: 00:10:04.002 { 00:10:04.002 "method": "bdev_lvol_get_lvstores", 00:10:04.002 "params": { 00:10:04.002 "uuid": "3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac" 00:10:04.002 } 00:10:04.002 } 00:10:04.002 Got JSON-RPC error response 00:10:04.002 GoRPCClient: error on JSON-RPC call 00:10:04.002 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:10:04.002 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:04.002 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:04.002 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:04.002 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:04.260 aio_bdev 00:10:04.260 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 61afd76c-39b1-45be-9800-a1fa86171211 00:10:04.260 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=61afd76c-39b1-45be-9800-a1fa86171211 00:10:04.260 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.260 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:04.260 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.260 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.260 18:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:04.528 18:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61afd76c-39b1-45be-9800-a1fa86171211 -t 2000 00:10:04.803 [ 
00:10:04.803 { 00:10:04.803 "aliases": [ 00:10:04.803 "lvs/lvol" 00:10:04.803 ], 00:10:04.803 "assigned_rate_limits": { 00:10:04.803 "r_mbytes_per_sec": 0, 00:10:04.803 "rw_ios_per_sec": 0, 00:10:04.803 "rw_mbytes_per_sec": 0, 00:10:04.803 "w_mbytes_per_sec": 0 00:10:04.803 }, 00:10:04.803 "block_size": 4096, 00:10:04.803 "claimed": false, 00:10:04.803 "driver_specific": { 00:10:04.803 "lvol": { 00:10:04.803 "base_bdev": "aio_bdev", 00:10:04.803 "clone": false, 00:10:04.803 "esnap_clone": false, 00:10:04.803 "lvol_store_uuid": "3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac", 00:10:04.803 "num_allocated_clusters": 38, 00:10:04.803 "snapshot": false, 00:10:04.803 "thin_provision": false 00:10:04.803 } 00:10:04.803 }, 00:10:04.803 "name": "61afd76c-39b1-45be-9800-a1fa86171211", 00:10:04.803 "num_blocks": 38912, 00:10:04.803 "product_name": "Logical Volume", 00:10:04.803 "supported_io_types": { 00:10:04.803 "abort": false, 00:10:04.803 "compare": false, 00:10:04.803 "compare_and_write": false, 00:10:04.803 "copy": false, 00:10:04.803 "flush": false, 00:10:04.803 "get_zone_info": false, 00:10:04.803 "nvme_admin": false, 00:10:04.803 "nvme_io": false, 00:10:04.803 "nvme_io_md": false, 00:10:04.803 "nvme_iov_md": false, 00:10:04.803 "read": true, 00:10:04.803 "reset": true, 00:10:04.803 "seek_data": true, 00:10:04.803 "seek_hole": true, 00:10:04.803 "unmap": true, 00:10:04.803 "write": true, 00:10:04.803 "write_zeroes": true, 00:10:04.803 "zcopy": false, 00:10:04.803 "zone_append": false, 00:10:04.803 "zone_management": false 00:10:04.803 }, 00:10:04.803 "uuid": "61afd76c-39b1-45be-9800-a1fa86171211", 00:10:04.803 "zoned": false 00:10:04.803 } 00:10:04.803 ] 00:10:04.804 18:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:04.804 18:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:04.804 18:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac 00:10:05.062 18:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:05.062 18:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:05.062 18:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac 00:10:05.321 18:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:05.321 18:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 61afd76c-39b1-45be-9800-a1fa86171211 00:10:05.579 18:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3ebb41c3-1b1d-46a8-9d85-64ef0cde15ac 00:10:05.837 18:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:06.095 18:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:06.354 00:10:06.354 real 0m20.086s 00:10:06.354 user 0m41.595s 00:10:06.354 sys 0m8.131s 00:10:06.354 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.354 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:06.354 ************************************ 00:10:06.354 END TEST lvs_grow_dirty 00:10:06.354 ************************************ 00:10:06.613 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:06.613 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:10:06.613 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:10:06.613 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:10:06.613 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:06.613 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:10:06.613 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:10:06.613 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:10:06.613 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:06.613 nvmf_trace.0 00:10:06.613 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:10:06.613 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:06.613 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:06.613 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:06.872 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:06.872 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:06.872 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:06.872 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:06.872 rmmod nvme_tcp 00:10:07.130 rmmod nvme_fabrics 00:10:07.130 rmmod nvme_keyring 00:10:07.130 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:07.130 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:07.130 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:07.130 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 80152 ']' 00:10:07.130 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 80152 00:10:07.130 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 80152 ']' 00:10:07.130 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 80152 00:10:07.130 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:10:07.130 18:30:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:07.130 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80152 00:10:07.130 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:07.130 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:07.130 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80152' 00:10:07.130 killing process with pid 80152 00:10:07.130 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 80152 00:10:07.130 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 80152 00:10:07.389 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:07.389 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:07.389 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:07.389 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:07.389 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:10:07.389 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:10:07.389 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:07.389 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:07.389 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:07.389 18:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:07.389 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:07.389 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:07.389 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:07.389 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:07.389 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:07.389 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:07.389 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:07.389 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:07.389 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:07.389 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:07.648 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:07.648 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:07.648 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:10:07.648 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.648 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.648 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.648 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:10:07.648 00:10:07.648 real 0m41.886s 00:10:07.648 user 1m5.847s 00:10:07.648 sys 0m11.729s 00:10:07.648 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.648 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:07.648 ************************************ 00:10:07.648 END TEST nvmf_lvs_grow 00:10:07.648 ************************************ 00:10:07.648 18:30:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:07.648 18:30:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:07.648 18:30:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.648 18:30:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.648 ************************************ 00:10:07.648 START TEST nvmf_bdev_io_wait 00:10:07.648 ************************************ 00:10:07.648 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:07.648 * Looking for test storage... 
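For orientation: the START/END banners and the real/user/sys triples framing each suite come from the run_test helper in autotest_common.sh, which is invoked above as run_test nvmf_bdev_io_wait .../bdev_io_wait.sh --transport=tcp. A stripped-down sketch of the pattern (illustrative only; the real helper also tracks exit codes and xtrace state):

  run_test() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"        # the real/user/sys lines in the log come from this
    echo "************ END TEST $name ************"
  }
  run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp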
00:10:07.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:07.648 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:07.648 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:10:07.648 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:07.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.908 --rc genhtml_branch_coverage=1 00:10:07.908 --rc genhtml_function_coverage=1 00:10:07.908 --rc genhtml_legend=1 00:10:07.908 --rc geninfo_all_blocks=1 00:10:07.908 --rc geninfo_unexecuted_blocks=1 00:10:07.908 00:10:07.908 ' 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:07.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.908 --rc genhtml_branch_coverage=1 00:10:07.908 --rc genhtml_function_coverage=1 00:10:07.908 --rc genhtml_legend=1 00:10:07.908 --rc geninfo_all_blocks=1 00:10:07.908 --rc geninfo_unexecuted_blocks=1 00:10:07.908 00:10:07.908 ' 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:07.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.908 --rc genhtml_branch_coverage=1 00:10:07.908 --rc genhtml_function_coverage=1 00:10:07.908 --rc genhtml_legend=1 00:10:07.908 --rc geninfo_all_blocks=1 00:10:07.908 --rc geninfo_unexecuted_blocks=1 00:10:07.908 00:10:07.908 ' 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:07.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.908 --rc genhtml_branch_coverage=1 00:10:07.908 --rc genhtml_function_coverage=1 00:10:07.908 --rc genhtml_legend=1 00:10:07.908 --rc geninfo_all_blocks=1 00:10:07.908 --rc geninfo_unexecuted_blocks=1 00:10:07.908 00:10:07.908 ' 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.908 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.908 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
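Before any NVMe/TCP traffic can flow, nvmftestinit (next in the trace) builds a veth/bridge topology so the initiator and the target live in separate network namespaces on the same L2 segment. Condensed to its essentials it amounts to the following (a sketch using the interface names and addresses from this run; the second veth pair carrying 10.0.0.2/10.0.0.4 and the stale-state teardown are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3    # reachability check, as done at the end of the setup below

The "Cannot find device" and "Cannot open network namespace" messages that follow are expected: the teardown helpers run first (each guarded by true), and on a fresh run there is nothing yet to tear down.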
00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:07.909 
18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:07.909 Cannot find device "nvmf_init_br" 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:07.909 Cannot find device "nvmf_init_br2" 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:07.909 Cannot find device "nvmf_tgt_br" 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:07.909 Cannot find device "nvmf_tgt_br2" 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:07.909 Cannot find device "nvmf_init_br" 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:07.909 Cannot find device "nvmf_init_br2" 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:07.909 Cannot find device "nvmf_tgt_br" 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:07.909 Cannot find device "nvmf_tgt_br2" 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:07.909 Cannot find device "nvmf_br" 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:07.909 Cannot find device "nvmf_init_if" 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:07.909 Cannot find device "nvmf_init_if2" 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:10:07.909 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:07.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:10:08.168 
18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:08.168 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:08.168 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:08.168 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:08.168 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:10:08.168 00:10:08.168 --- 10.0.0.3 ping statistics --- 00:10:08.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.168 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:08.427 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:08.427 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:08.427 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:10:08.427 00:10:08.427 --- 10.0.0.4 ping statistics --- 00:10:08.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.427 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:10:08.427 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:08.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:08.427 00:10:08.427 --- 10.0.0.1 ping statistics --- 00:10:08.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.427 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:08.427 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:08.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:08.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:10:08.427 00:10:08.427 --- 10.0.0.2 ping statistics --- 00:10:08.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.427 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:10:08.427 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.427 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # return 0 00:10:08.427 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:08.427 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.427 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:08.427 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:08.427 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.427 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:08.428 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:08.428 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:08.428 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:08.428 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:08.428 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.428 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=80626 00:10:08.428 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:08.428 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 80626 00:10:08.428 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 80626 ']' 00:10:08.428 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.428 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:08.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.428 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.428 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:08.428 18:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.428 [2024-12-14 18:30:24.034817] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
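Because nvmf_tgt was launched with --wait-for-rpc, the framework idles until bdev options are set and init is triggered over the RPC socket; the deliberately tiny bdev_io pool (-p 5 -c 1) is what lets this suite exhaust the pool and exercise the bdev-layer I/O wait path. The bring-up the trace performs next, condensed (every command and value here is the one visible below):

  rpc.py bdev_set_options -p 5 -c 1                 # must be sent before framework init
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB malloc bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

Once the listener is up, concurrent bdevperf instances are pointed at 10.0.0.3:4420; the capture below ends as the first of them (the write job, queue depth 128, 4096-byte I/O) is being configured alongside a read job.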
00:10:08.428 [2024-12-14 18:30:24.034892] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.428 [2024-12-14 18:30:24.176393] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:08.686 [2024-12-14 18:30:24.249944] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.686 [2024-12-14 18:30:24.250228] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.686 [2024-12-14 18:30:24.250297] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.686 [2024-12-14 18:30:24.250370] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.686 [2024-12-14 18:30:24.250428] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.686 [2024-12-14 18:30:24.250565] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.686 [2024-12-14 18:30:24.250993] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.686 [2024-12-14 18:30:24.251709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.686 [2024-12-14 18:30:24.251719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.686 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:08.686 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:08.686 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:08.686 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:08.686 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.686 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.686 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:08.686 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.686 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.686 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.686 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:08.686 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.686 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:10:08.947 [2024-12-14 18:30:24.459419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.947 Malloc0 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.947 [2024-12-14 18:30:24.530106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=80666 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=80668 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:08.947 { 00:10:08.947 "params": { 
00:10:08.947 "name": "Nvme$subsystem", 00:10:08.947 "trtype": "$TEST_TRANSPORT", 00:10:08.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:08.947 "adrfam": "ipv4", 00:10:08.947 "trsvcid": "$NVMF_PORT", 00:10:08.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:08.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:08.947 "hdgst": ${hdgst:-false}, 00:10:08.947 "ddgst": ${ddgst:-false} 00:10:08.947 }, 00:10:08.947 "method": "bdev_nvme_attach_controller" 00:10:08.947 } 00:10:08.947 EOF 00:10:08.947 )") 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=80670 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:08.947 { 00:10:08.947 "params": { 00:10:08.947 "name": "Nvme$subsystem", 00:10:08.947 "trtype": "$TEST_TRANSPORT", 00:10:08.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:08.947 "adrfam": "ipv4", 00:10:08.947 "trsvcid": "$NVMF_PORT", 00:10:08.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:08.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:08.947 "hdgst": ${hdgst:-false}, 00:10:08.947 "ddgst": ${ddgst:-false} 00:10:08.947 }, 00:10:08.947 "method": "bdev_nvme_attach_controller" 00:10:08.947 } 00:10:08.947 EOF 00:10:08.947 )") 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=80672 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:08.947 { 00:10:08.947 "params": { 00:10:08.947 "name": "Nvme$subsystem", 00:10:08.947 "trtype": "$TEST_TRANSPORT", 00:10:08.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:08.947 "adrfam": "ipv4", 00:10:08.947 "trsvcid": "$NVMF_PORT", 00:10:08.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:08.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:08.947 "hdgst": ${hdgst:-false}, 00:10:08.947 "ddgst": ${ddgst:-false} 00:10:08.947 }, 00:10:08.947 "method": "bdev_nvme_attach_controller" 00:10:08.947 } 00:10:08.947 EOF 00:10:08.947 )") 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:08.947 "params": { 00:10:08.947 "name": "Nvme1", 00:10:08.947 "trtype": "tcp", 00:10:08.947 "traddr": "10.0.0.3", 00:10:08.947 "adrfam": "ipv4", 00:10:08.947 "trsvcid": "4420", 00:10:08.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:08.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:08.947 "hdgst": false, 00:10:08.947 "ddgst": false 00:10:08.947 }, 00:10:08.947 "method": "bdev_nvme_attach_controller" 00:10:08.947 }' 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:08.947 "params": { 00:10:08.947 "name": "Nvme1", 00:10:08.947 "trtype": "tcp", 00:10:08.947 "traddr": "10.0.0.3", 00:10:08.947 "adrfam": "ipv4", 00:10:08.947 "trsvcid": "4420", 00:10:08.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:08.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:08.947 "hdgst": false, 00:10:08.947 "ddgst": false 00:10:08.947 }, 00:10:08.947 "method": "bdev_nvme_attach_controller" 00:10:08.947 }' 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:08.947 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:08.948 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:08.948 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:08.948 { 00:10:08.948 "params": { 00:10:08.948 "name": "Nvme$subsystem", 00:10:08.948 "trtype": "$TEST_TRANSPORT", 00:10:08.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:08.948 "adrfam": "ipv4", 00:10:08.948 "trsvcid": "$NVMF_PORT", 00:10:08.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:08.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:10:08.948 "hdgst": ${hdgst:-false}, 00:10:08.948 "ddgst": ${ddgst:-false} 00:10:08.948 }, 00:10:08.948 "method": "bdev_nvme_attach_controller" 00:10:08.948 } 00:10:08.948 EOF 00:10:08.948 )") 00:10:08.948 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:08.948 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:08.948 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:08.948 "params": { 00:10:08.948 "name": "Nvme1", 00:10:08.948 "trtype": "tcp", 00:10:08.948 "traddr": "10.0.0.3", 00:10:08.948 "adrfam": "ipv4", 00:10:08.948 "trsvcid": "4420", 00:10:08.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:08.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:08.948 "hdgst": false, 00:10:08.948 "ddgst": false 00:10:08.948 }, 00:10:08.948 "method": "bdev_nvme_attach_controller" 00:10:08.948 }' 00:10:08.948 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:08.948 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:08.948 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:08.948 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:08.948 "params": { 00:10:08.948 "name": "Nvme1", 00:10:08.948 "trtype": "tcp", 00:10:08.948 "traddr": "10.0.0.3", 00:10:08.948 "adrfam": "ipv4", 00:10:08.948 "trsvcid": "4420", 00:10:08.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:08.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:08.948 "hdgst": false, 00:10:08.948 "ddgst": false 00:10:08.948 }, 00:10:08.948 "method": "bdev_nvme_attach_controller" 00:10:08.948 }' 00:10:08.948 [2024-12-14 18:30:24.601662] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:08.948 [2024-12-14 18:30:24.601751] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:08.948 [2024-12-14 18:30:24.602014] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:08.948 [2024-12-14 18:30:24.602093] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:08.948 18:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 80666 00:10:08.948 [2024-12-14 18:30:24.632123] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:08.948 [2024-12-14 18:30:24.632222] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:08.948 [2024-12-14 18:30:24.640907] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:08.948 [2024-12-14 18:30:24.641026] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:09.207 [2024-12-14 18:30:24.821212] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.207 [2024-12-14 18:30:24.894399] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:09.207 [2024-12-14 18:30:24.928496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.466 [2024-12-14 18:30:25.008838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:10:09.466 [2024-12-14 18:30:25.031111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.466 [2024-12-14 18:30:25.110300] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.466 [2024-12-14 18:30:25.124700] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:10:09.466 [2024-12-14 18:30:25.189768] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:10:09.466 Running I/O for 1 seconds... 00:10:09.725 Running I/O for 1 seconds... 00:10:09.725 Running I/O for 1 seconds... 00:10:09.725 Running I/O for 1 seconds... 00:10:10.661 10787.00 IOPS, 42.14 MiB/s 00:10:10.661 Latency(us) 00:10:10.661 [2024-12-14T18:30:26.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.661 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:10.661 Nvme1n1 : 1.01 10860.33 42.42 0.00 0.00 11743.41 5451.40 16801.05 00:10:10.661 [2024-12-14T18:30:26.414Z] =================================================================================================================== 00:10:10.661 [2024-12-14T18:30:26.414Z] Total : 10860.33 42.42 0.00 0.00 11743.41 5451.40 16801.05 00:10:10.661 3757.00 IOPS, 14.68 MiB/s 00:10:10.661 Latency(us) 00:10:10.661 [2024-12-14T18:30:26.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.661 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:10.661 Nvme1n1 : 1.02 3804.95 14.86 0.00 0.00 33102.90 10902.81 57671.68 00:10:10.661 [2024-12-14T18:30:26.414Z] =================================================================================================================== 00:10:10.661 [2024-12-14T18:30:26.414Z] Total : 3804.95 14.86 0.00 0.00 33102.90 10902.81 57671.68 00:10:10.661 4261.00 IOPS, 16.64 MiB/s 00:10:10.661 Latency(us) 00:10:10.661 [2024-12-14T18:30:26.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.661 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:10.661 Nvme1n1 : 1.01 4384.27 17.13 0.00 0.00 29079.00 6076.97 73400.32 00:10:10.661 [2024-12-14T18:30:26.414Z] =================================================================================================================== 00:10:10.661 [2024-12-14T18:30:26.414Z] Total : 4384.27 17.13 0.00 0.00 29079.00 6076.97 73400.32 00:10:10.920 208336.00 IOPS, 813.81 MiB/s 00:10:10.920 Latency(us) 00:10:10.920 [2024-12-14T18:30:26.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.920 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:10.920 Nvme1n1 : 1.01 207226.63 809.48 0.00 0.00 613.45 394.71 3932.16 00:10:10.920 [2024-12-14T18:30:26.673Z] 
=================================================================================================================== 00:10:10.920 [2024-12-14T18:30:26.673Z] Total : 207226.63 809.48 0.00 0.00 613.45 394.71 3932.16 00:10:10.920 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 80668 00:10:10.920 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 80670 00:10:10.920 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 80672 00:10:10.920 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.920 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.920 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:10.920 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.920 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:10.920 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:10.920 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:10.920 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:11.179 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:11.179 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:11.179 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:11.179 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:11.179 rmmod nvme_tcp 00:10:11.179 rmmod nvme_fabrics 00:10:11.179 rmmod nvme_keyring 00:10:11.179 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:11.179 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:11.179 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:11.179 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 80626 ']' 00:10:11.179 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 80626 00:10:11.179 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 80626 ']' 00:10:11.179 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 80626 00:10:11.179 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:10:11.179 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.179 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80626 00:10:11.179 killing process with pid 80626 00:10:11.179 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:11.179 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:11.179 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 80626' 00:10:11.179 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 80626 00:10:11.179 18:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 80626 00:10:11.438 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:11.438 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:11.438 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:11.438 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:11.438 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:10:11.438 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:11.438 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:10:11.438 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.438 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:11.438 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:11.438 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:11.438 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:11.438 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:11.438 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:11.438 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:11.438 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:11.438 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:11.438 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:11.438 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:11.696 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:11.696 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:11.696 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:11.696 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:11.696 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.696 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.696 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.696 18:30:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:10:11.696 00:10:11.696 real 0m4.015s 00:10:11.696 user 0m16.041s 00:10:11.696 sys 0m2.292s 00:10:11.696 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.696 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:11.696 ************************************ 00:10:11.696 END TEST nvmf_bdev_io_wait 00:10:11.696 ************************************ 00:10:11.696 18:30:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:11.696 18:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:11.696 18:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.696 18:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.696 ************************************ 00:10:11.696 START TEST nvmf_queue_depth 00:10:11.696 ************************************ 00:10:11.696 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:11.696 * Looking for test storage... 00:10:11.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:11.696 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:11.696 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:10:11.696 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:11.955 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:11.955 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.955 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.955 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.955 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.955 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.955 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.955 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.955 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.955 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.955 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:11.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.956 --rc genhtml_branch_coverage=1 00:10:11.956 --rc genhtml_function_coverage=1 00:10:11.956 --rc genhtml_legend=1 00:10:11.956 --rc geninfo_all_blocks=1 00:10:11.956 --rc geninfo_unexecuted_blocks=1 00:10:11.956 00:10:11.956 ' 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:11.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.956 --rc genhtml_branch_coverage=1 00:10:11.956 --rc genhtml_function_coverage=1 00:10:11.956 --rc genhtml_legend=1 00:10:11.956 --rc geninfo_all_blocks=1 00:10:11.956 --rc geninfo_unexecuted_blocks=1 00:10:11.956 00:10:11.956 ' 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:11.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.956 --rc genhtml_branch_coverage=1 00:10:11.956 --rc genhtml_function_coverage=1 00:10:11.956 --rc genhtml_legend=1 00:10:11.956 --rc geninfo_all_blocks=1 00:10:11.956 --rc geninfo_unexecuted_blocks=1 00:10:11.956 00:10:11.956 ' 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:11.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.956 --rc genhtml_branch_coverage=1 00:10:11.956 --rc genhtml_function_coverage=1 00:10:11.956 --rc genhtml_legend=1 00:10:11.956 --rc geninfo_all_blocks=1 00:10:11.956 --rc geninfo_unexecuted_blocks=1 00:10:11.956 00:10:11.956 ' 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:11.956 18:30:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.956 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:11.956 
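Sourcing nvmf/common.sh above also mints a fresh initiator identity: nvme gen-hostnqn produces NVME_HOSTNQN, the trailing UUID becomes NVME_HOSTID, and both are packed into the NVME_HOST array for later nvme connect calls. A minimal sketch of that derivation, assuming the parameter expansion (the log records only the resulting values):

    # Derive the host identity the way the harness does.
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep the trailing UUID only
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")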
18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.956 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:11.957 18:30:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:11.957 Cannot find device "nvmf_init_br" 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:11.957 Cannot find device "nvmf_init_br2" 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:11.957 Cannot find device "nvmf_tgt_br" 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:11.957 Cannot find device "nvmf_tgt_br2" 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:11.957 Cannot find device "nvmf_init_br" 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:11.957 Cannot find device "nvmf_init_br2" 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:11.957 Cannot find device "nvmf_tgt_br" 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:11.957 Cannot find device "nvmf_tgt_br2" 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:11.957 Cannot find device "nvmf_br" 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:11.957 Cannot find device "nvmf_init_if" 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:11.957 Cannot find device "nvmf_init_if2" 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:11.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:11.957 18:30:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:11.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:11.957 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:12.216 
18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:12.216 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:12.216 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:10:12.216 00:10:12.216 --- 10.0.0.3 ping statistics --- 00:10:12.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.216 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:12.216 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:12.216 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:10:12.216 00:10:12.216 --- 10.0.0.4 ping statistics --- 00:10:12.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.216 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:12.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:12.216 00:10:12.216 --- 10.0.0.1 ping statistics --- 00:10:12.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.216 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:12.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:12.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:10:12.216 00:10:12.216 --- 10.0.0.2 ping statistics --- 00:10:12.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.216 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # return 0 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:12.216 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:12.475 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:12.475 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:12.475 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:12.475 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.475 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=80935 00:10:12.475 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 80935 00:10:12.475 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:12.475 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 80935 ']' 00:10:12.475 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.475 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.475 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.475 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.475 18:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.475 [2024-12-14 18:30:28.051425] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:12.475 [2024-12-14 18:30:28.051524] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.475 [2024-12-14 18:30:28.200415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.734 [2024-12-14 18:30:28.282284] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.734 [2024-12-14 18:30:28.282373] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.734 [2024-12-14 18:30:28.282388] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.734 [2024-12-14 18:30:28.282399] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.734 [2024-12-14 18:30:28.282408] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.734 [2024-12-14 18:30:28.282446] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.301 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.301 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:13.301 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:13.301 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:13.301 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.301 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.559 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.560 [2024-12-14 18:30:29.061976] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.560 Malloc0 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
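With the transport created and Malloc0 in place, the script finishes provisioning with the namespace and listener calls just below, the same sequence bdev_io_wait used earlier. Issued by hand, the whole setup would look roughly like this (rpc_cmd in the harness wraps scripts/rpc.py against the default socket; all arguments are copied from the log):

    # Provision the NVMe-oF/TCP target that bdevperf connects to.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192       # flags as in the trace
    $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420                     # listen on the netns veth IP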
00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.560 [2024-12-14 18:30:29.130889] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=80986 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 80986 /var/tmp/bdevperf.sock 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 80986 ']' 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:13.560 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.560 [2024-12-14 18:30:29.202583] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
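With the namespace attached and the listener up on 10.0.0.3:4420, the host side drives the actual queue-depth exercise through bdevperf in RPC-driven mode. A condensed sketch of queue_depth.sh lines 29-35 as traced above (the waitforlisten step on the bdevperf socket is elided):

    spdk=/home/vagrant/spdk_repo/spdk
    # bdevperf in RPC-driven mode (-z): 1024 outstanding 4 KiB verify I/Os for 10 s
    $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 1024 -o 4096 -w verify -t 10 &
    # Attach an NVMe/TCP controller to the subsystem exported on 10.0.0.3:4420
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Kick off the run; results arrive as the JSON block seen below
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests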
00:10:13.560 [2024-12-14 18:30:29.202690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80986 ] 00:10:13.818 [2024-12-14 18:30:29.345448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.818 [2024-12-14 18:30:29.431010] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.753 18:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.753 18:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:14.754 18:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:14.754 18:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.754 18:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.754 NVMe0n1 00:10:14.754 18:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.754 18:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:14.754 Running I/O for 10 seconds... 00:10:16.647 10250.00 IOPS, 40.04 MiB/s [2024-12-14T18:30:33.777Z] 10345.50 IOPS, 40.41 MiB/s [2024-12-14T18:30:34.713Z] 10245.67 IOPS, 40.02 MiB/s [2024-12-14T18:30:35.652Z] 10412.75 IOPS, 40.67 MiB/s [2024-12-14T18:30:36.588Z] 10475.00 IOPS, 40.92 MiB/s [2024-12-14T18:30:37.524Z] 10579.33 IOPS, 41.33 MiB/s [2024-12-14T18:30:38.460Z] 10666.71 IOPS, 41.67 MiB/s [2024-12-14T18:30:39.396Z] 10705.12 IOPS, 41.82 MiB/s [2024-12-14T18:30:40.773Z] 10720.78 IOPS, 41.88 MiB/s [2024-12-14T18:30:40.773Z] 10757.00 IOPS, 42.02 MiB/s 00:10:25.020 Latency(us) 00:10:25.020 [2024-12-14T18:30:40.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.020 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:25.020 Verification LBA range: start 0x0 length 0x4000 00:10:25.020 NVMe0n1 : 10.06 10798.83 42.18 0.00 0.00 94507.16 12988.04 98184.84 00:10:25.020 [2024-12-14T18:30:40.773Z] =================================================================================================================== 00:10:25.020 [2024-12-14T18:30:40.773Z] Total : 10798.83 42.18 0.00 0.00 94507.16 12988.04 98184.84 00:10:25.020 { 00:10:25.020 "results": [ 00:10:25.020 { 00:10:25.020 "job": "NVMe0n1", 00:10:25.020 "core_mask": "0x1", 00:10:25.020 "workload": "verify", 00:10:25.020 "status": "finished", 00:10:25.020 "verify_range": { 00:10:25.020 "start": 0, 00:10:25.020 "length": 16384 00:10:25.020 }, 00:10:25.020 "queue_depth": 1024, 00:10:25.020 "io_size": 4096, 00:10:25.020 "runtime": 10.056091, 00:10:25.020 "iops": 10798.828292226075, 00:10:25.020 "mibps": 42.182923016508106, 00:10:25.020 "io_failed": 0, 00:10:25.020 "io_timeout": 0, 00:10:25.020 "avg_latency_us": 94507.16099298972, 00:10:25.020 "min_latency_us": 12988.043636363636, 00:10:25.020 "max_latency_us": 98184.84363636364 00:10:25.020 } 00:10:25.020 ], 00:10:25.020 "core_count": 1 00:10:25.020 } 00:10:25.020 18:30:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 80986 00:10:25.020 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 80986 ']' 00:10:25.020 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 80986 00:10:25.020 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:25.020 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:25.020 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80986 00:10:25.020 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:25.020 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:25.020 killing process with pid 80986 00:10:25.020 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80986' 00:10:25.020 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 80986 00:10:25.020 Received shutdown signal, test time was about 10.000000 seconds 00:10:25.020 00:10:25.020 Latency(us) 00:10:25.020 [2024-12-14T18:30:40.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.020 [2024-12-14T18:30:40.773Z] =================================================================================================================== 00:10:25.020 [2024-12-14T18:30:40.773Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:25.020 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 80986 00:10:25.020 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:25.020 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:25.020 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:25.020 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:25.279 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:25.279 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:25.279 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:25.279 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:25.279 rmmod nvme_tcp 00:10:25.279 rmmod nvme_fabrics 00:10:25.279 rmmod nvme_keyring 00:10:25.279 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:25.279 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:25.279 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:25.279 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 80935 ']' 00:10:25.279 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 80935 00:10:25.279 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 80935 ']' 00:10:25.279 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- 
# kill -0 80935 00:10:25.279 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:25.279 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:25.279 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80935 00:10:25.279 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:25.279 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:25.279 killing process with pid 80935 00:10:25.279 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80935' 00:10:25.279 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 80935 00:10:25.279 18:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 80935 00:10:25.538 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:25.538 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:25.538 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:25.538 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:25.538 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:10:25.538 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:10:25.538 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:25.538 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:25.538 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:25.538 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:25.538 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:25.538 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:25.538 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:25.538 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:25.538 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:25.538 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:25.538 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:25.538 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:25.797 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:25.797 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:25.797 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:10:25.797 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:25.797 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:25.797 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.797 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.797 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.797 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:10:25.797 00:10:25.797 real 0m14.071s 00:10:25.797 user 0m23.637s 00:10:25.797 sys 0m2.289s 00:10:25.797 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.797 ************************************ 00:10:25.797 END TEST nvmf_queue_depth 00:10:25.797 ************************************ 00:10:25.797 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.797 18:30:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:25.797 18:30:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:25.797 18:30:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.797 18:30:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:25.797 ************************************ 00:10:25.797 START TEST nvmf_target_multipath 00:10:25.797 ************************************ 00:10:25.797 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:26.057 * Looking for test storage... 
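The multipath test starting here re-runs nvmftestinit, so the veth topology deleted a moment ago is rebuilt from scratch. The teardown it will need again at the end (nvmf_veth_fini plus remove_spdk_ns, as traced above) condenses to the sketch below; remove_spdk_ns runs with its trace suppressed via xtrace_disable_per_cmd, so the final netns delete is an assumption, and the error suppression is added here for illustration:

    # Drop the SPDK-tagged iptables rules, then dismantle the bridge and veth pairs
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2>/dev/null
        ip link set "$dev" down 2>/dev/null
    done
    ip link delete nvmf_br type bridge 2>/dev/null
    ip link delete nvmf_init_if 2>/dev/null
    ip link delete nvmf_init_if2 2>/dev/null
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null
    ip netns delete nvmf_tgt_ns_spdk   # assumed remove_spdk_ns equivalent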
00:10:26.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:26.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.057 --rc genhtml_branch_coverage=1 00:10:26.057 --rc genhtml_function_coverage=1 00:10:26.057 --rc genhtml_legend=1 00:10:26.057 --rc geninfo_all_blocks=1 00:10:26.057 --rc geninfo_unexecuted_blocks=1 00:10:26.057 00:10:26.057 ' 00:10:26.057 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:26.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.057 --rc genhtml_branch_coverage=1 00:10:26.058 --rc genhtml_function_coverage=1 00:10:26.058 --rc genhtml_legend=1 00:10:26.058 --rc geninfo_all_blocks=1 00:10:26.058 --rc geninfo_unexecuted_blocks=1 00:10:26.058 00:10:26.058 ' 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:26.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.058 --rc genhtml_branch_coverage=1 00:10:26.058 --rc genhtml_function_coverage=1 00:10:26.058 --rc genhtml_legend=1 00:10:26.058 --rc geninfo_all_blocks=1 00:10:26.058 --rc geninfo_unexecuted_blocks=1 00:10:26.058 00:10:26.058 ' 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:26.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.058 --rc genhtml_branch_coverage=1 00:10:26.058 --rc genhtml_function_coverage=1 00:10:26.058 --rc genhtml_legend=1 00:10:26.058 --rc geninfo_all_blocks=1 00:10:26.058 --rc geninfo_unexecuted_blocks=1 00:10:26.058 00:10:26.058 ' 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.058 
18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:26.058 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:26.058 18:30:41 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:26.058 Cannot find device "nvmf_init_br" 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:26.058 Cannot find device "nvmf_init_br2" 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:26.058 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:26.058 Cannot find device "nvmf_tgt_br" 00:10:26.059 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:10:26.059 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:26.059 Cannot find device "nvmf_tgt_br2" 00:10:26.059 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:10:26.059 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:26.059 Cannot find device "nvmf_init_br" 00:10:26.059 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:10:26.059 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:26.059 Cannot find device "nvmf_init_br2" 00:10:26.059 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:10:26.059 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:26.059 Cannot find device "nvmf_tgt_br" 00:10:26.059 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:10:26.059 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:26.059 Cannot find device "nvmf_tgt_br2" 00:10:26.059 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:10:26.059 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:26.059 Cannot find device "nvmf_br" 00:10:26.059 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:10:26.059 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:26.059 Cannot find device "nvmf_init_if" 00:10:26.059 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:10:26.059 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:26.318 Cannot find device "nvmf_init_if2" 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:26.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:26.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
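nvmf_veth_init has now rebuilt the topology, and this time both target addresses matter: the multipath test needs two independent paths, so there are two initiator interfaces (10.0.0.1/.2) on the host side and two target interfaces (10.0.0.3/.4) inside the namespace. The nvmf_br bridge created just below stitches the four host-side veth ends together. The address plan, condensed from the commands traced above:

    ip netns add nvmf_tgt_ns_spdk
    # Four veth pairs: one end stays host-side (*_br), the other is the usable NIC
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # Move the target ends into the namespace, then assign one /24 across all four
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if     # first initiator path
    ip addr add 10.0.0.2/24 dev nvmf_init_if2    # second initiator path
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2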
00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:26.318 18:30:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:26.318 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:26.318 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:26.318 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:26.318 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:26.318 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:26.318 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:26.318 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:26.318 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:26.318 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:10:26.318 00:10:26.318 --- 10.0.0.3 ping statistics --- 00:10:26.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.318 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:26.318 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:26.318 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:26.318 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:10:26.318 00:10:26.318 --- 10.0.0.4 ping statistics --- 00:10:26.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.318 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:10:26.318 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:26.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:26.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:10:26.318 00:10:26.318 --- 10.0.0.1 ping statistics --- 00:10:26.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.318 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:26.318 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:26.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:26.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:10:26.318 00:10:26.318 --- 10.0.0.2 ping statistics --- 00:10:26.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.318 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:10:26.318 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.318 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # return 0 00:10:26.318 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:26.318 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.318 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:26.318 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:26.318 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.318 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:26.318 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:26.319 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:10:26.319 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:26.319 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:26.319 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:26.319 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:26.319 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:26.577 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # nvmfpid=81381 00:10:26.577 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # waitforlisten 81381 00:10:26.577 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:26.577 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 81381 ']' 00:10:26.577 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.577 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:26.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:26.577 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.577 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:26.577 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:26.577 [2024-12-14 18:30:42.127392] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:26.577 [2024-12-14 18:30:42.127458] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.577 [2024-12-14 18:30:42.262115] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:26.836 [2024-12-14 18:30:42.343653] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.836 [2024-12-14 18:30:42.343727] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.836 [2024-12-14 18:30:42.343742] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.836 [2024-12-14 18:30:42.343752] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.836 [2024-12-14 18:30:42.343761] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:26.836 [2024-12-14 18:30:42.343914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.836 [2024-12-14 18:30:42.344077] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.836 [2024-12-14 18:30:42.344751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:26.836 [2024-12-14 18:30:42.344765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.836 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:26.836 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:10:26.836 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:26.836 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:26.836 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:26.836 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.836 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:27.407 [2024-12-14 18:30:42.854776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.407 18:30:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:27.665 Malloc0 00:10:27.665 18:30:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 
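Unlike the queue-depth test, multipath.sh creates its subsystem with -r, which enables ANA (asymmetric namespace access) reporting, so each listener advertises its own per-path access state. The test then adds listeners on both 10.0.0.3 and 10.0.0.4, connects the host over each, and flips the states to steer I/O. A sketch of the state flip and the host-side check used further down (the RPC calls match multipath.sh lines 92-93 as traced below; note the RPC spells the state non_optimized while sysfs reports non-optimized):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Mark the first path unusable and the second usable-but-not-preferred
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
    # Host side: the kernel exposes the resulting state per path
    cat /sys/block/nvme0c0n1/ana_state   # expected: inaccessible
    cat /sys/block/nvme0c1n1/ana_state   # expected: non-optimized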
00:10:27.924 18:30:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:28.182 18:30:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:28.441 [2024-12-14 18:30:44.079273] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:28.441 18:30:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:10:28.700 [2024-12-14 18:30:44.315456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:10:28.700 18:30:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:28.958 18:30:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:10:29.217 18:30:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:29.217 18:30:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:10:29.217 18:30:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:29.217 18:30:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:29.217 18:30:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ 
nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:31.123 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:31.124 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:31.124 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:31.124 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:31.124 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:31.124 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:31.124 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:31.124 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:31.124 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:31.124 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:31.124 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:31.124 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=81505 00:10:31.124 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:31.124 18:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:31.124 [global] 00:10:31.124 thread=1 00:10:31.124 invalidate=1 00:10:31.124 rw=randrw 00:10:31.124 time_based=1 00:10:31.124 runtime=6 00:10:31.124 ioengine=libaio 00:10:31.124 direct=1 00:10:31.124 bs=4096 00:10:31.124 iodepth=128 00:10:31.124 norandommap=0 00:10:31.124 numjobs=1 00:10:31.124 00:10:31.124 verify_dump=1 00:10:31.124 verify_backlog=512 00:10:31.124 verify_state_save=0 00:10:31.124 do_verify=1 00:10:31.124 verify=crc32c-intel 00:10:31.124 [job0] 00:10:31.124 filename=/dev/nvme0n1 00:10:31.124 Could not set queue depth (nvme0n1) 00:10:31.382 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:31.382 fio-3.35 00:10:31.382 Starting 1 thread 00:10:32.317 18:30:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:32.576 18:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:32.835 18:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:32.835 18:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:32.835 18:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:32.835 18:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:32.835 18:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:32.835 18:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:32.835 18:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:32.835 18:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:32.835 18:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:32.835 18:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:32.835 18:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:32.835 18:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:32.835 18:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:33.770 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:33.770 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:33.770 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:33.770 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:34.029 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:34.287 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:34.287 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:34.287 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:34.287 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:34.287 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:34.287 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:34.287 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:34.288 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:34.288 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:34.288 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:34.288 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:34.288 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:34.288 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:35.224 18:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:35.224 18:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:35.224 18:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:35.224 18:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 81505 00:10:37.755 00:10:37.755 job0: (groupid=0, jobs=1): err= 0: pid=81532: Sat Dec 14 18:30:53 2024 00:10:37.755 read: IOPS=10.8k, BW=42.0MiB/s (44.1MB/s)(252MiB/6005msec) 00:10:37.755 slat (usec): min=4, max=5531, avg=52.46, stdev=234.94 00:10:37.755 clat (usec): min=2915, max=20385, avg=8040.99, stdev=1343.41 00:10:37.755 lat (usec): min=2925, max=20473, avg=8093.45, stdev=1353.55 00:10:37.755 clat percentiles (usec): 00:10:37.755 | 1.00th=[ 4817], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 7177], 00:10:37.755 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8225], 00:10:37.755 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[10421], 00:10:37.755 | 99.00th=[12125], 99.50th=[12780], 99.90th=[16581], 99.95th=[17695], 00:10:37.755 | 99.99th=[19006] 00:10:37.755 bw ( KiB/s): min=11792, max=27776, per=53.29%, avg=22938.91, stdev=5006.32, samples=11 00:10:37.755 iops : min= 2948, max= 6944, avg=5734.64, stdev=1251.61, samples=11 00:10:37.755 write: IOPS=6315, BW=24.7MiB/s (25.9MB/s)(136MiB/5511msec); 0 zone resets 00:10:37.755 slat (usec): min=15, max=3545, avg=65.00, stdev=166.14 00:10:37.755 clat (usec): min=584, max=18409, avg=6948.90, stdev=1191.93 00:10:37.755 lat (usec): min=640, max=18440, avg=7013.90, stdev=1196.69 00:10:37.755 clat percentiles (usec): 00:10:37.755 | 1.00th=[ 3884], 5.00th=[ 5080], 10.00th=[ 5800], 20.00th=[ 6259], 00:10:37.755 | 30.00th=[ 6521], 40.00th=[ 6718], 50.00th=[ 6915], 60.00th=[ 7111], 00:10:37.755 | 70.00th=[ 7308], 80.00th=[ 7570], 90.00th=[ 8029], 95.00th=[ 8717], 00:10:37.755 | 99.00th=[10945], 99.50th=[11863], 99.90th=[15664], 99.95th=[16319], 00:10:37.755 | 99.99th=[17957] 00:10:37.755 bw ( KiB/s): min=12320, max=27200, per=90.73%, avg=22921.18, stdev=4662.32, samples=11 00:10:37.755 iops : min= 3080, max= 6800, avg=5730.18, stdev=1165.65, samples=11 00:10:37.755 lat (usec) : 750=0.01% 00:10:37.755 lat (msec) : 2=0.01%, 4=0.58%, 10=94.35%, 20=5.06%, 50=0.01% 00:10:37.755 cpu : usr=5.56%, sys=23.48%, ctx=6310, majf=0, minf=78 00:10:37.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:37.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:37.755 issued rwts: total=64614,34805,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:37.755 00:10:37.755 Run status group 0 (all jobs): 00:10:37.755 READ: bw=42.0MiB/s (44.1MB/s), 42.0MiB/s-42.0MiB/s (44.1MB/s-44.1MB/s), io=252MiB (265MB), run=6005-6005msec 00:10:37.755 WRITE: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=136MiB (143MB), run=5511-5511msec 00:10:37.755 00:10:37.755 Disk stats (read/write): 00:10:37.755 nvme0n1: ios=63793/33894, merge=0/0, ticks=481382/220337, in_queue=701719, util=98.66% 00:10:37.755 18:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:38.013 18:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:10:38.271 18:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:38.271 18:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:38.271 18:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:38.271 18:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:38.271 18:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:38.271 18:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:38.271 18:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:38.271 18:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:38.271 18:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:38.271 18:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:38.271 18:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:38.271 18:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:10:38.271 18:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:39.205 18:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:39.205 18:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:39.205 18:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:39.205 18:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:39.205 18:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=81664 00:10:39.205 18:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:39.205 18:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:39.205 [global] 00:10:39.205 thread=1 00:10:39.205 invalidate=1 00:10:39.205 rw=randrw 00:10:39.205 time_based=1 00:10:39.205 runtime=6 00:10:39.205 ioengine=libaio 00:10:39.205 direct=1 00:10:39.205 bs=4096 00:10:39.205 iodepth=128 00:10:39.205 norandommap=0 00:10:39.205 numjobs=1 00:10:39.205 00:10:39.205 verify_dump=1 00:10:39.205 verify_backlog=512 00:10:39.205 verify_state_save=0 00:10:39.205 do_verify=1 00:10:39.205 verify=crc32c-intel 00:10:39.205 [job0] 00:10:39.205 filename=/dev/nvme0n1 00:10:39.205 Could not set queue depth (nvme0n1) 00:10:39.205 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.205 fio-3.35 00:10:39.205 Starting 1 thread 00:10:40.137 18:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:40.396 18:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:40.654 18:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:40.654 18:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:40.654 18:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:40.654 18:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:40.654 18:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:40.654 18:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:40.654 18:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:40.654 18:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:40.654 18:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:40.654 18:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:40.654 18:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:40.654 18:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:40.654 18:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:42.028 18:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:42.028 18:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:42.028 18:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:42.028 18:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:42.028 18:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:42.287 18:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:42.287 18:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:42.287 18:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:42.287 18:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:42.287 18:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:42.287 18:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:42.287 18:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:42.287 18:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:42.287 18:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:42.287 18:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:42.287 18:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:42.287 18:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:42.287 18:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:43.256 18:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:43.256 18:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:43.257 18:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:43.257 18:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 81664 00:10:45.787 00:10:45.787 job0: (groupid=0, jobs=1): err= 0: pid=81685: Sat Dec 14 18:31:01 2024 00:10:45.787 read: IOPS=12.0k, BW=47.0MiB/s (49.3MB/s)(282MiB/6004msec) 00:10:45.787 slat (usec): min=2, max=7662, avg=41.95, stdev=207.96 00:10:45.787 clat (usec): min=399, max=18246, avg=7303.39, stdev=1596.16 00:10:45.787 lat (usec): min=425, max=18258, avg=7345.34, stdev=1606.79 00:10:45.787 clat percentiles (usec): 00:10:45.787 | 1.00th=[ 3359], 5.00th=[ 4621], 10.00th=[ 5342], 20.00th=[ 6325], 00:10:45.787 | 30.00th=[ 6718], 40.00th=[ 6980], 50.00th=[ 7242], 60.00th=[ 7570], 00:10:45.787 | 70.00th=[ 7898], 80.00th=[ 8356], 90.00th=[ 9110], 95.00th=[ 9896], 00:10:45.787 | 99.00th=[11994], 99.50th=[13042], 99.90th=[15008], 99.95th=[16319], 00:10:45.787 | 99.99th=[17695] 00:10:45.787 bw ( KiB/s): min= 9200, max=36894, per=53.51%, avg=25772.27, stdev=7843.62, samples=11 00:10:45.787 iops : min= 2300, max= 9223, avg=6443.00, stdev=1960.87, samples=11 00:10:45.787 write: IOPS=7257, BW=28.3MiB/s (29.7MB/s)(150MiB/5291msec); 0 zone resets 00:10:45.787 slat (usec): min=3, max=6908, avg=52.64, stdev=134.19 00:10:45.787 clat (usec): min=737, max=15948, avg=6093.45, stdev=1403.79 00:10:45.787 lat (usec): min=797, max=15973, avg=6146.10, stdev=1412.17 00:10:45.787 clat percentiles (usec): 00:10:45.787 | 1.00th=[ 2868], 5.00th=[ 3589], 10.00th=[ 4047], 20.00th=[ 4883], 00:10:45.787 | 30.00th=[ 5669], 40.00th=[ 6063], 50.00th=[ 6325], 60.00th=[ 6587], 00:10:45.787 | 70.00th=[ 6783], 80.00th=[ 7046], 90.00th=[ 7504], 95.00th=[ 7963], 00:10:45.787 | 99.00th=[10028], 99.50th=[10945], 99.90th=[12911], 99.95th=[13960], 00:10:45.787 | 99.99th=[15401] 00:10:45.787 bw ( KiB/s): min= 9640, max=37804, per=88.75%, avg=25763.36, stdev=7740.34, samples=11 00:10:45.787 iops : min= 2410, max= 9451, avg=6440.82, stdev=1935.12, samples=11 00:10:45.787 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:45.787 lat (msec) : 2=0.15%, 4=4.44%, 10=91.98%, 20=3.41% 00:10:45.787 cpu : usr=6.11%, sys=24.49%, ctx=7215, majf=0, minf=114 00:10:45.787 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:45.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:45.787 issued rwts: total=72299,38397,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.787 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:45.787 00:10:45.787 Run status group 0 (all jobs): 00:10:45.787 READ: bw=47.0MiB/s (49.3MB/s), 47.0MiB/s-47.0MiB/s (49.3MB/s-49.3MB/s), io=282MiB (296MB), run=6004-6004msec 00:10:45.787 WRITE: bw=28.3MiB/s (29.7MB/s), 28.3MiB/s-28.3MiB/s (29.7MB/s-29.7MB/s), io=150MiB (157MB), run=5291-5291msec 00:10:45.787 00:10:45.787 Disk stats (read/write): 00:10:45.787 nvme0n1: ios=70640/38397, merge=0/0, ticks=483852/217079, in_queue=700931, util=98.73% 00:10:45.787 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:45.787 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:10:45.787 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:10:45.787 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:45.787 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.787 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:45.787 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.787 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:10:45.787 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.787 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:45.787 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:45.787 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:45.787 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:45.787 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:45.787 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:46.045 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:46.045 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:46.045 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.045 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:46.045 rmmod nvme_tcp 00:10:46.045 rmmod nvme_fabrics 00:10:46.045 rmmod nvme_keyring 00:10:46.045 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.045 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:46.045 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:46.045 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n 81381 ']' 00:10:46.045 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # killprocess 81381 00:10:46.045 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 81381 ']' 00:10:46.045 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 81381 00:10:46.045 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:10:46.045 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:46.045 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81381 00:10:46.045 killing process with pid 81381 00:10:46.045 18:31:01 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:46.045 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:46.045 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81381' 00:10:46.045 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 81381 00:10:46.045 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 81381 00:10:46.304 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:46.304 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:46.304 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:46.304 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:46.304 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:46.304 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:10:46.304 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:10:46.304 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:46.304 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:46.304 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:46.304 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:46.304 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:46.304 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:46.304 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:46.304 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:46.304 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:46.304 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:46.304 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:46.563 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:46.563 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:46.563 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:46.563 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:46.563 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:46.563 18:31:02 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.563 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.563 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.563 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:46.563 ************************************ 00:10:46.563 END TEST nvmf_target_multipath 00:10:46.563 ************************************ 00:10:46.563 00:10:46.563 real 0m20.727s 00:10:46.563 user 1m20.265s 00:10:46.563 sys 0m6.740s 00:10:46.563 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.563 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:46.563 18:31:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:46.563 18:31:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:46.563 18:31:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.563 18:31:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:46.563 ************************************ 00:10:46.563 START TEST nvmf_zcopy 00:10:46.563 ************************************ 00:10:46.563 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:46.823 * Looking for test storage... 
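Before the zcopy test proceeds, a note on the multipath flow that just completed above: each check_ana_state call polls /sys/block/<path>/ana_state until the kernel reports the expected ANA state, retrying once per second against a roughly 20-attempt budget; the fio workloads then verify I/O survives the inaccessible/non-optimized transitions driven by nvmf_subsystem_listener_set_ana_state. A minimal sketch of that helper, reconstructed from the multipath.sh@18-26 trace lines above (an approximation, not the verbatim SPDK source):

check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    # Retry while the sysfs node is missing or reports a different state.
    while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1   # give up after ~20 attempts
        sleep 1s
    done
}
# usage, as in the trace: check_ana_state nvme0c0n1 optimized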
00:10:46.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:46.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.823 --rc genhtml_branch_coverage=1 00:10:46.823 --rc genhtml_function_coverage=1 00:10:46.823 --rc genhtml_legend=1 00:10:46.823 --rc geninfo_all_blocks=1 00:10:46.823 --rc geninfo_unexecuted_blocks=1 00:10:46.823 00:10:46.823 ' 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:46.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.823 --rc genhtml_branch_coverage=1 00:10:46.823 --rc genhtml_function_coverage=1 00:10:46.823 --rc genhtml_legend=1 00:10:46.823 --rc geninfo_all_blocks=1 00:10:46.823 --rc geninfo_unexecuted_blocks=1 00:10:46.823 00:10:46.823 ' 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:46.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.823 --rc genhtml_branch_coverage=1 00:10:46.823 --rc genhtml_function_coverage=1 00:10:46.823 --rc genhtml_legend=1 00:10:46.823 --rc geninfo_all_blocks=1 00:10:46.823 --rc geninfo_unexecuted_blocks=1 00:10:46.823 00:10:46.823 ' 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:46.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.823 --rc genhtml_branch_coverage=1 00:10:46.823 --rc genhtml_function_coverage=1 00:10:46.823 --rc genhtml_legend=1 00:10:46.823 --rc geninfo_all_blocks=1 00:10:46.823 --rc geninfo_unexecuted_blocks=1 00:10:46.823 00:10:46.823 ' 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
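The scripts/common.sh trace above (lt / cmp_versions) is a component-wise dotted-version comparison, used here to pick lcov coverage flags by version. A condensed sketch with the same behavior, assuming the semantics visible in the trace rather than copying the source verbatim:

# "lt A B" succeeds when version A sorts strictly before version B,
# comparing numeric components left to right (missing components count as 0).
lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov older than 2"   # matches the trace: 1.15 < 2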
00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.823 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
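nvmftestinit below first tears down any stale links (hence the "Cannot find device ..." messages on a fresh runner) and then rebuilds a self-contained test topology: two initiator veth interfaces (10.0.0.1/24, 10.0.0.2/24) on the host, two target interfaces (10.0.0.3/24, 10.0.0.4/24) inside the nvmf_tgt_ns_spdk namespace, all joined over the nvmf_br bridge, with iptables rules admitting NVMe/TCP traffic on port 4420. A condensed sketch reduced to one initiator/target pair; the trace below runs the same commands for both pairs:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge                             # bridge joins both sides
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                          # initiator -> target sanity check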
00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:46.823 Cannot find device "nvmf_init_br" 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:46.823 18:31:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:46.823 Cannot find device "nvmf_init_br2" 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:46.823 Cannot find device "nvmf_tgt_br" 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:46.823 Cannot find device "nvmf_tgt_br2" 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:46.823 Cannot find device "nvmf_init_br" 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:46.823 Cannot find device "nvmf_init_br2" 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:46.823 Cannot find device "nvmf_tgt_br" 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:46.823 Cannot find device "nvmf_tgt_br2" 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:46.823 Cannot find device "nvmf_br" 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:46.823 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:47.082 Cannot find device "nvmf_init_if" 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:47.082 Cannot find device "nvmf_init_if2" 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:47.082 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:47.082 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:47.082 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:47.082 18:31:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:47.341 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:47.341 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:10:47.341 00:10:47.341 --- 10.0.0.3 ping statistics --- 00:10:47.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.341 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:47.341 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:47.341 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:10:47.341 00:10:47.341 --- 10.0.0.4 ping statistics --- 00:10:47.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.341 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:47.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:47.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:10:47.341 00:10:47.341 --- 10.0.0.1 ping statistics --- 00:10:47.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.341 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:47.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:47.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:10:47.341 00:10:47.341 --- 10.0.0.2 ping statistics --- 00:10:47.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.341 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # return 0 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=82020 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 82020 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 82020 ']' 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:47.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:47.341 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.341 [2024-12-14 18:31:02.952148] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:47.341 [2024-12-14 18:31:02.952240] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.601 [2024-12-14 18:31:03.093354] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.601 [2024-12-14 18:31:03.166300] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.601 [2024-12-14 18:31:03.166386] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.601 [2024-12-14 18:31:03.166397] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.601 [2024-12-14 18:31:03.166404] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.601 [2024-12-14 18:31:03.166411] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:47.601 [2024-12-14 18:31:03.166441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.536 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:48.536 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:48.536 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:48.536 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:48.536 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.536 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.536 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:48.536 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:48.536 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.536 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.537 [2024-12-14 18:31:03.979314] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.537 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.537 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:48.537 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.537 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.537 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.537 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:48.537 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.537 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.537 [2024-12-14 18:31:03.995461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:10:48.537 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.537 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:48.537 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.537 18:31:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.537 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.537 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:48.537 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.537 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.537 malloc0 00:10:48.537 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.537 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:48.537 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.537 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.537 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.537 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:48.537 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:48.537 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:48.537 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:48.537 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:48.537 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:48.537 { 00:10:48.537 "params": { 00:10:48.537 "name": "Nvme$subsystem", 00:10:48.537 "trtype": "$TEST_TRANSPORT", 00:10:48.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:48.537 "adrfam": "ipv4", 00:10:48.537 "trsvcid": "$NVMF_PORT", 00:10:48.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:48.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:48.537 "hdgst": ${hdgst:-false}, 00:10:48.537 "ddgst": ${ddgst:-false} 00:10:48.537 }, 00:10:48.537 "method": "bdev_nvme_attach_controller" 00:10:48.537 } 00:10:48.537 EOF 00:10:48.537 )") 00:10:48.537 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:48.537 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
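The xtrace above is the target-side setup from target/zcopy.sh: a TCP transport created with zero-copy enabled, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces (-m 10), data and discovery listeners on 10.0.0.3:4420, and a 32 MiB malloc bdev exposed as namespace 1. A minimal standalone sketch of the same sequence, assuming a running nvmf_tgt and that rpc_cmd forwards to the in-tree scripts/rpc.py as it does in the autotest harness; all flags are copied from the trace:

# target-side setup, mirroring the rpc_cmd calls traced above
NQN=nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy        # zero-copy TCP transport
scripts/rpc.py nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0               # 32 MiB bdev, 4 KiB blocks
scripts/rpc.py nvmf_subsystem_add_ns "$NQN" malloc0 -n 1           # attached as NSID 1
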
00:10:48.537 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:48.537 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:48.537 "params": { 00:10:48.537 "name": "Nvme1", 00:10:48.537 "trtype": "tcp", 00:10:48.537 "traddr": "10.0.0.3", 00:10:48.537 "adrfam": "ipv4", 00:10:48.537 "trsvcid": "4420", 00:10:48.537 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:48.537 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:48.537 "hdgst": false, 00:10:48.537 "ddgst": false 00:10:48.537 }, 00:10:48.537 "method": "bdev_nvme_attach_controller" 00:10:48.537 }' 00:10:48.537 [2024-12-14 18:31:04.116867] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:48.537 [2024-12-14 18:31:04.116971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82071 ] 00:10:48.537 [2024-12-14 18:31:04.258951] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.795 [2024-12-14 18:31:04.341303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.053 Running I/O for 10 seconds... 00:10:50.923 7349.00 IOPS, 57.41 MiB/s [2024-12-14T18:31:07.611Z] 7442.00 IOPS, 58.14 MiB/s [2024-12-14T18:31:08.994Z] 7459.00 IOPS, 58.27 MiB/s [2024-12-14T18:31:09.561Z] 7504.75 IOPS, 58.63 MiB/s [2024-12-14T18:31:10.938Z] 7528.40 IOPS, 58.82 MiB/s [2024-12-14T18:31:11.874Z] 7526.00 IOPS, 58.80 MiB/s [2024-12-14T18:31:12.809Z] 7535.86 IOPS, 58.87 MiB/s [2024-12-14T18:31:13.745Z] 7545.88 IOPS, 58.95 MiB/s [2024-12-14T18:31:14.685Z] 7544.67 IOPS, 58.94 MiB/s [2024-12-14T18:31:14.685Z] 7549.30 IOPS, 58.98 MiB/s 00:10:58.933 Latency(us) 00:10:58.933 [2024-12-14T18:31:14.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:58.933 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:58.933 Verification LBA range: start 0x0 length 0x1000 00:10:58.933 Nvme1n1 : 10.01 7549.02 58.98 0.00 0.00 16905.88 2353.34 27405.96 00:10:58.933 [2024-12-14T18:31:14.686Z] =================================================================================================================== 00:10:58.933 [2024-12-14T18:31:14.686Z] Total : 7549.02 58.98 0.00 0.00 16905.88 2353.34 27405.96 00:10:59.192 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=82195 00:10:59.192 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:59.192 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.192 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:59.192 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:59.192 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:59.192 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:59.192 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:59.192 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:59.192 { 00:10:59.192 "params": { 00:10:59.192 "name": "Nvme$subsystem", 
00:10:59.192 "trtype": "$TEST_TRANSPORT", 00:10:59.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:59.192 "adrfam": "ipv4", 00:10:59.192 "trsvcid": "$NVMF_PORT", 00:10:59.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:59.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:59.192 "hdgst": ${hdgst:-false}, 00:10:59.192 "ddgst": ${ddgst:-false} 00:10:59.192 }, 00:10:59.192 "method": "bdev_nvme_attach_controller" 00:10:59.192 } 00:10:59.192 EOF 00:10:59.192 )") 00:10:59.192 [2024-12-14 18:31:14.827484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.192 [2024-12-14 18:31:14.827528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.192 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:59.192 2024/12/14 18:31:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.192 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:10:59.192 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:59.192 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:59.192 "params": { 00:10:59.192 "name": "Nvme1", 00:10:59.192 "trtype": "tcp", 00:10:59.192 "traddr": "10.0.0.3", 00:10:59.192 "adrfam": "ipv4", 00:10:59.192 "trsvcid": "4420", 00:10:59.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:59.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:59.192 "hdgst": false, 00:10:59.192 "ddgst": false 00:10:59.192 }, 00:10:59.192 "method": "bdev_nvme_attach_controller" 00:10:59.192 }' 00:10:59.192 [2024-12-14 18:31:14.839417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.192 [2024-12-14 18:31:14.839441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.192 2024/12/14 18:31:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.192 [2024-12-14 18:31:14.851413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.192 [2024-12-14 18:31:14.851435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.192 2024/12/14 18:31:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.192 [2024-12-14 18:31:14.863413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.192 [2024-12-14 18:31:14.863434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.192 [2024-12-14 18:31:14.866930] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:59.192 [2024-12-14 18:31:14.866994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82195 ] 00:10:59.192 2024/12/14 18:31:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.192 [2024-12-14 18:31:14.875416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.192 [2024-12-14 18:31:14.875437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.192 2024/12/14 18:31:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.192 [2024-12-14 18:31:14.887418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.192 [2024-12-14 18:31:14.887439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.192 2024/12/14 18:31:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.192 [2024-12-14 18:31:14.899420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.192 [2024-12-14 18:31:14.899441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.192 2024/12/14 18:31:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.192 [2024-12-14 18:31:14.911425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.192 [2024-12-14 18:31:14.911447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.192 2024/12/14 18:31:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.192 [2024-12-14 18:31:14.923426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.192 [2024-12-14 18:31:14.923448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.192 2024/12/14 18:31:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.192 [2024-12-14 18:31:14.935425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.192 [2024-12-14 18:31:14.935446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:59.192 2024/12/14 18:31:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.451 [2024-12-14 18:31:14.947426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.451 [2024-12-14 18:31:14.947446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:14.959426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:14.959447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:14.971429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:14.971449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:14.983433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:14.983453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:14.991429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:14.991449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 [2024-12-14 18:31:14.991447] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.452 2024/12/14 18:31:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:15.003461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:15.003487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:15.015448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:15.015471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:15.027477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:15.027504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:15.039453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:15.039478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:15.047461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:15.047499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:15.059449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:15.059470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 [2024-12-14 18:31:15.063432] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.452 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:15.071452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:15.071473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:15.083492] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:15.083523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:15.095481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:15.095526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:15.103464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:15.103487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:15.111469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:15.111491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:15.119469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:15.119490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:15.127471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:15.127499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:15.139480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:15.139505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:15.151482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:15.151504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:15.163493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:15.163535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:15.175493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:15.175520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:15.187494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:15.187516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.452 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.452 [2024-12-14 18:31:15.199514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.452 [2024-12-14 18:31:15.199543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.712 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.712 [2024-12-14 18:31:15.211515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.712 [2024-12-14 18:31:15.211540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.712 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.712 [2024-12-14 18:31:15.223517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:59.712 [2024-12-14 18:31:15.223541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.712 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.712 [2024-12-14 18:31:15.235518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.712 [2024-12-14 18:31:15.235542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.712 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.712 [2024-12-14 18:31:15.247521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.712 [2024-12-14 18:31:15.247545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.712 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.712 [2024-12-14 18:31:15.259532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.712 [2024-12-14 18:31:15.259559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.712 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.712 Running I/O for 5 seconds... 
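Everything from here to the end of the captured run appears to be the expected-failure half of the test: with malloc0 already attached as NSID 1, each further nvmf_subsystem_add_ns call is rejected in spdk_nvmf_subsystem_add_ns_ext and surfaces to the caller as JSON-RPC error -32602 (Invalid parameters), while the second bdevperf instance (5 s, queue depth 128, 50/50 randrw, 8 KiB I/O) keeps driving traffic against the namespace. A sketch of asserting that failure from a script, against the same target as above; NOT is written out in plain bash as a stand-in for the harness helper of the same name (an assumption, for illustration):

# assert the duplicate-NSID rejection exercised repeatedly above
NOT() { ! "$@"; }    # stand-in: succeeds only when its command fails

if NOT scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
    echo "got the expected -32602: NSID 1 already in use"
else
    echo "unexpected success: NSID 1 should already be in use" >&2
    exit 1
fi
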
00:10:59.712 [2024-12-14 18:31:15.271526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.712 [2024-12-14 18:31:15.271548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.712 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.712 [2024-12-14 18:31:15.288375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.712 [2024-12-14 18:31:15.288404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.712 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.712 [2024-12-14 18:31:15.304482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.712 [2024-12-14 18:31:15.304511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.712 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.712 [2024-12-14 18:31:15.316032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.712 [2024-12-14 18:31:15.316059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.712 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.712 [2024-12-14 18:31:15.332169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.712 [2024-12-14 18:31:15.332197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.712 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.712 [2024-12-14 18:31:15.348438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.712 [2024-12-14 18:31:15.348466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.712 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.712 [2024-12-14 18:31:15.364291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.712 [2024-12-14 18:31:15.364319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.712 2024/12/14 18:31:15 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.712 [2024-12-14 18:31:15.379340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.712 [2024-12-14 18:31:15.379368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.712 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.712 [2024-12-14 18:31:15.396032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.712 [2024-12-14 18:31:15.396060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.712 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.712 [2024-12-14 18:31:15.412352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.712 [2024-12-14 18:31:15.412379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.712 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.712 [2024-12-14 18:31:15.429233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.712 [2024-12-14 18:31:15.429261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.712 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.712 [2024-12-14 18:31:15.444972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.712 [2024-12-14 18:31:15.444999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.712 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.712 [2024-12-14 18:31:15.459256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.712 [2024-12-14 18:31:15.459283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.712 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.971 [2024-12-14 18:31:15.473964] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.971 [2024-12-14 18:31:15.473992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.971 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.971 [2024-12-14 18:31:15.484908] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.971 [2024-12-14 18:31:15.484935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.971 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.971 [2024-12-14 18:31:15.501349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.971 [2024-12-14 18:31:15.501376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.971 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.971 [2024-12-14 18:31:15.517610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.971 [2024-12-14 18:31:15.517636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.971 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.971 [2024-12-14 18:31:15.533192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.971 [2024-12-14 18:31:15.533219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.971 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.971 [2024-12-14 18:31:15.547301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.971 [2024-12-14 18:31:15.547329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.971 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.971 [2024-12-14 18:31:15.559237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.971 [2024-12-14 18:31:15.559265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.971 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.971 [2024-12-14 18:31:15.575467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.971 [2024-12-14 18:31:15.575495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.971 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.971 [2024-12-14 18:31:15.591158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.971 [2024-12-14 18:31:15.591185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.971 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.971 [2024-12-14 18:31:15.603675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.971 [2024-12-14 18:31:15.603703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.971 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.971 [2024-12-14 18:31:15.614524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.971 [2024-12-14 18:31:15.614551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.971 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.971 [2024-12-14 18:31:15.630093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.971 [2024-12-14 18:31:15.630131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.971 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.971 [2024-12-14 18:31:15.645748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.972 [2024-12-14 18:31:15.645785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.972 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.972 [2024-12-14 18:31:15.661255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:59.972 [2024-12-14 18:31:15.661283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.972 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.972 [2024-12-14 18:31:15.678291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.972 [2024-12-14 18:31:15.678319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.972 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.972 [2024-12-14 18:31:15.694670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.972 [2024-12-14 18:31:15.694698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.972 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.972 [2024-12-14 18:31:15.710401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.972 [2024-12-14 18:31:15.710429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.972 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.231 [2024-12-14 18:31:15.726358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.231 [2024-12-14 18:31:15.726386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.231 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.231 [2024-12-14 18:31:15.741861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.231 [2024-12-14 18:31:15.741889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.231 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.231 [2024-12-14 18:31:15.758184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.231 [2024-12-14 18:31:15.758211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.231 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.231 [2024-12-14 18:31:15.774299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.231 [2024-12-14 18:31:15.774328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.231 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.231 [2024-12-14 18:31:15.790259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.231 [2024-12-14 18:31:15.790286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.231 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.231 [2024-12-14 18:31:15.806202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.231 [2024-12-14 18:31:15.806229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.231 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.231 [2024-12-14 18:31:15.819991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.231 [2024-12-14 18:31:15.820019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.231 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.231 [2024-12-14 18:31:15.835395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.231 [2024-12-14 18:31:15.835423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.231 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.231 [2024-12-14 18:31:15.851392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.231 [2024-12-14 18:31:15.851420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.231 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.231 [2024-12-14 18:31:15.867304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:00.231 [2024-12-14 18:31:15.867331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:00.231 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:11:00.231 [2024-12-14 18:31:15.882787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:00.231 [2024-12-14 18:31:15.882815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:00.231 2024/12/14 18:31:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" / Code=-32602 triplet repeats for every nvmf_subsystem_add_ns retry from 18:31:15.897 through 18:31:16.258 ...]
00:11:00.751 14552.00 IOPS, 113.69 MiB/s [2024-12-14T18:31:16.504Z]
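The repeating triplet above is the negative-path check in this test: the client keeps calling the SPDK nvmf_subsystem_add_ns RPC with an NSID that is already attached to nqn.2016-06.io.spdk:cnode1, and the target rejects each attempt with JSON-RPC error -32602 (Invalid params). The %!s(bool=false) fragment is the Go test client formatting a boolean with %s; it is not part of the target's response. Below is a minimal repro sketch, assuming an SPDK target on its default RPC socket /var/tmp/spdk.sock with the subsystem and the malloc0 bdev already created as in this run; it is an illustration, not part of the test harness.

#!/usr/bin/env python3
# Hypothetical repro sketch (not from the test run): send the same
# nvmf_subsystem_add_ns JSON-RPC call the log shows being rejected.
# Assumes an SPDK target on its default RPC socket /var/tmp/spdk.sock,
# with nqn.2016-06.io.spdk:cnode1 and bdev malloc0 already set up.
import json
import socket

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")
    sock.sendall(json.dumps(request).encode())
    # One recv suffices for this small response. If NSID 1 is already in
    # use, the reply carries {"error": {"code": -32602, ...}}, matching
    # the Code=-32602 Msg=Invalid parameters entries in the log.
    print(json.loads(sock.recv(65536).decode()))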
[... the rejection triplet continues in a tight loop, one failed attempt every 11-17 ms, from 18:31:16.273 through 18:31:17.261 ...]
00:11:01.532 14615.00 IOPS, 114.18 MiB/s [2024-12-14T18:31:17.285Z]
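The interleaved fio samples give a quick sanity check that the data path keeps running at full rate while the RPC rejections are generated: 14615 IOPS at 114.18 MiB/s works out to 8 KiB per I/O (14615 x 8 KiB = 116920 KiB/s, which is 114.18 MiB/s). The 8 KiB block size is inferred from the numbers, not stated in this excerpt:

# Inferred, not stated in this excerpt: bandwidth / IOPS gives the I/O size.
iops = 14615.00
mib_per_s = 114.18
print(f"{mib_per_s * 1024 / iops:.2f} KiB per I/O")  # -> 8.00 KiB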
[... rejections continue from 18:31:17.277 onward at the same cadence; the last complete entry in this excerpt is at 18:31:17.920 ...]
00:11:02.342 [2024-12-14 18:31:17.936981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:02.342 [2024-12-14 18:31:17.937008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:02.342 2024/12/14 18:31:17 error on
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.342 [2024-12-14 18:31:17.948740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.342 [2024-12-14 18:31:17.948766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.342 2024/12/14 18:31:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.342 [2024-12-14 18:31:17.965057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.342 [2024-12-14 18:31:17.965085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.342 2024/12/14 18:31:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.342 [2024-12-14 18:31:17.980834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.342 [2024-12-14 18:31:17.980862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.342 2024/12/14 18:31:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.343 [2024-12-14 18:31:17.997718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.343 [2024-12-14 18:31:17.997744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.343 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.343 [2024-12-14 18:31:18.013874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.343 [2024-12-14 18:31:18.013912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.343 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.343 [2024-12-14 18:31:18.029858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.343 [2024-12-14 18:31:18.029886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.343 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.343 [2024-12-14 18:31:18.041433] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.343 [2024-12-14 18:31:18.041460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.343 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.343 [2024-12-14 18:31:18.056730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.343 [2024-12-14 18:31:18.056757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.343 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.343 [2024-12-14 18:31:18.072737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.343 [2024-12-14 18:31:18.072765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.609 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.609 [2024-12-14 18:31:18.089032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.609 [2024-12-14 18:31:18.089059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.609 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.609 [2024-12-14 18:31:18.105828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.609 [2024-12-14 18:31:18.105856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.609 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.609 [2024-12-14 18:31:18.121783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.610 [2024-12-14 18:31:18.121810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.610 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.610 [2024-12-14 18:31:18.137800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.610 [2024-12-14 18:31:18.137827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.610 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.610 [2024-12-14 18:31:18.149432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.610 [2024-12-14 18:31:18.149459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.610 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.610 [2024-12-14 18:31:18.164162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.610 [2024-12-14 18:31:18.164190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.610 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.610 [2024-12-14 18:31:18.175934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.610 [2024-12-14 18:31:18.175962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.610 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.610 [2024-12-14 18:31:18.192018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.610 [2024-12-14 18:31:18.192045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.610 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.610 [2024-12-14 18:31:18.207916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.610 [2024-12-14 18:31:18.207943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.610 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.610 [2024-12-14 18:31:18.224177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.610 [2024-12-14 18:31:18.224211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.610 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.610 [2024-12-14 18:31:18.239894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:02.610 [2024-12-14 18:31:18.239921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.610 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.610 [2024-12-14 18:31:18.254631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.610 [2024-12-14 18:31:18.254658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.610 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.610 [2024-12-14 18:31:18.270518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.610 [2024-12-14 18:31:18.270555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.610 14591.33 IOPS, 113.99 MiB/s [2024-12-14T18:31:18.363Z] 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.610 [2024-12-14 18:31:18.285859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.610 [2024-12-14 18:31:18.285888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.610 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.610 [2024-12-14 18:31:18.302573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.610 [2024-12-14 18:31:18.302622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.610 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.610 [2024-12-14 18:31:18.319765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.610 [2024-12-14 18:31:18.319804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.610 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:02.610 [2024-12-14 18:31:18.336281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.610 [2024-12-14 18:31:18.336310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.610 2024/12/14 18:31:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
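Each rejected attempt above is a plain JSON-RPC 2.0 exchange: the client sends nvmf_subsystem_add_ns with the subsystem NQN and a namespace object (bdev_name: malloc0, nsid: 1), and the target answers with error -32602 because NSID 1 is already attached. A minimal sketch of the same exchange in Python, assuming a target listening on SPDK's usual default socket path (/var/tmp/spdk.sock) with the subsystem and bdev from the log already configured:

```python
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # assumed: the SPDK target's default RPC socket

def spdk_rpc(method: str, params: dict) -> dict:
    """Send one JSON-RPC 2.0 request to a running SPDK target and return its reply."""
    request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("target closed the socket before replying")
            buf += chunk
            try:
                # The reply is a single JSON object; keep reading until it parses.
                reply, _ = json.JSONDecoder().raw_decode(buf.decode())
                return reply
            except ValueError:
                continue

# NSID 1 is already attached to cnode1, so this call should come back with the
# same error seen throughout the log: Code=-32602 Msg=Invalid parameters.
reply = spdk_rpc("nvmf_subsystem_add_ns", {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": {"bdev_name": "malloc0", "nsid": 1},
})
print(reply.get("error"))  # e.g. {'code': -32602, 'message': 'Invalid parameters'}
```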
00:11:02.610 [... identical rejected attempts continue from 18:31:18.285859 through 18:31:19.272201; the elapsed-time marker advances from 00:11:02.610 to 00:11:03.649 ...]
00:11:03.649 14551.25 IOPS, 113.68 MiB/s [2024-12-14T18:31:19.402Z]
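The two perf samples interleaved with the errors (14591.33 then 14551.25 IOPS) are worth a second look: both are consistent with a fixed ~8 KiB I/O size, and IOPS dips only about 0.3% while the control plane is being hammered with rejected RPCs, so the duplicate-NSID failures never disturb the data path. The 8 KiB inference is ours, not stated by the log; a quick check of the arithmetic:

```python
# Throughput samples embedded in the log: (IOPS, MiB/s, wall-clock time).
samples = [
    (14591.33, 113.99, "18:31:18.363Z"),
    (14551.25, 113.68, "18:31:19.402Z"),
]
for iops, mibps, when in samples:
    io_bytes = mibps * 1024 * 1024 / iops  # implied bytes per I/O
    print(f"{when}: {iops:.2f} IOPS @ {mibps:.2f} MiB/s "
          f"-> {io_bytes:.0f} B (~{io_bytes / 1024:.0f} KiB) per I/O")

dip = (samples[0][0] - samples[1][0]) / samples[0][0]
print(f"IOPS change across the error burst: {-dip:.2%}")
```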
00:11:03.650 [... attempts keep failing identically from 18:31:19.287021 through 18:31:19.535198; the elapsed-time marker advances to 00:11:03.909 ...]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.909 [2024-12-14 18:31:19.549942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.909 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.910 [2024-12-14 18:31:19.566245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.910 [2024-12-14 18:31:19.566277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.910 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.910 [2024-12-14 18:31:19.582053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.910 [2024-12-14 18:31:19.582085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.910 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.910 [2024-12-14 18:31:19.596893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.910 [2024-12-14 18:31:19.597081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.910 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.910 [2024-12-14 18:31:19.613204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.910 [2024-12-14 18:31:19.613238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.910 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.910 [2024-12-14 18:31:19.629531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.910 [2024-12-14 18:31:19.629563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.910 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.910 [2024-12-14 18:31:19.645237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.910 [2024-12-14 18:31:19.645269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.910 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.910 [2024-12-14 18:31:19.659237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.910 [2024-12-14 18:31:19.659269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.169 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.169 [2024-12-14 18:31:19.675082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.169 [2024-12-14 18:31:19.675116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.169 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.169 [2024-12-14 18:31:19.690340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.169 [2024-12-14 18:31:19.690387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.169 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.169 [2024-12-14 18:31:19.704286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.169 [2024-12-14 18:31:19.704468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.169 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.169 [2024-12-14 18:31:19.719826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.169 [2024-12-14 18:31:19.719863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.169 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.169 [2024-12-14 18:31:19.735576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.169 [2024-12-14 18:31:19.735765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.169 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.169 [2024-12-14 18:31:19.750149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:04.169 [2024-12-14 18:31:19.750180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.169 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.169 [2024-12-14 18:31:19.766182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.169 [2024-12-14 18:31:19.766212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.169 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.169 [2024-12-14 18:31:19.782232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.169 [2024-12-14 18:31:19.782263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.169 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.169 [2024-12-14 18:31:19.797900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.169 [2024-12-14 18:31:19.797933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.169 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.169 [2024-12-14 18:31:19.811956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.169 [2024-12-14 18:31:19.811987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.169 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.169 [2024-12-14 18:31:19.823339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.169 [2024-12-14 18:31:19.823370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.169 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.169 [2024-12-14 18:31:19.839441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.169 [2024-12-14 18:31:19.839472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.169 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.169 [2024-12-14 18:31:19.854580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.169 [2024-12-14 18:31:19.854623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.169 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.169 [2024-12-14 18:31:19.869161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.169 [2024-12-14 18:31:19.869192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.170 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.170 [2024-12-14 18:31:19.885744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.170 [2024-12-14 18:31:19.885791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.170 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.170 [2024-12-14 18:31:19.901936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.170 [2024-12-14 18:31:19.901967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.170 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.170 [2024-12-14 18:31:19.917975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.170 [2024-12-14 18:31:19.918006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.170 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.429 [2024-12-14 18:31:19.929808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.429 [2024-12-14 18:31:19.929839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.429 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.429 [2024-12-14 18:31:19.945676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:04.429 [2024-12-14 18:31:19.945706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.429 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.429 [2024-12-14 18:31:19.961579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.429 [2024-12-14 18:31:19.961624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.429 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.429 [2024-12-14 18:31:19.977404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.429 [2024-12-14 18:31:19.977435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.429 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.429 [2024-12-14 18:31:19.988923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.429 [2024-12-14 18:31:19.988954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.429 2024/12/14 18:31:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.429 [2024-12-14 18:31:20.006393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.429 [2024-12-14 18:31:20.006426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.429 2024/12/14 18:31:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.429 [2024-12-14 18:31:20.019741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.429 [2024-12-14 18:31:20.019772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.429 2024/12/14 18:31:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.429 [2024-12-14 18:31:20.036084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.429 [2024-12-14 18:31:20.036116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.429 2024/12/14 18:31:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.429 [2024-12-14 18:31:20.053468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.429 [2024-12-14 18:31:20.053507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.429 2024/12/14 18:31:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.429 [2024-12-14 18:31:20.063967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.429 [2024-12-14 18:31:20.064008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.429 2024/12/14 18:31:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.429 [2024-12-14 18:31:20.080552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.429 [2024-12-14 18:31:20.080583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.429 2024/12/14 18:31:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.429 [2024-12-14 18:31:20.096771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.429 [2024-12-14 18:31:20.096801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.429 2024/12/14 18:31:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.429 [2024-12-14 18:31:20.113216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.429 [2024-12-14 18:31:20.113248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.429 2024/12/14 18:31:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.429 [2024-12-14 18:31:20.129534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.429 [2024-12-14 18:31:20.129564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.429 2024/12/14 18:31:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.429 [2024-12-14 18:31:20.145883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.429 [2024-12-14 18:31:20.145915] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.429 2024/12/14 18:31:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.429 [2024-12-14 18:31:20.161280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.429 [2024-12-14 18:31:20.161311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.429 2024/12/14 18:31:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.429 [2024-12-14 18:31:20.175431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.429 [2024-12-14 18:31:20.175462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.429 2024/12/14 18:31:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.688 [2024-12-14 18:31:20.187878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.688 [2024-12-14 18:31:20.187909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.688 2024/12/14 18:31:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.688 [2024-12-14 18:31:20.202897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.688 [2024-12-14 18:31:20.202939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.688 2024/12/14 18:31:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.688 [2024-12-14 18:31:20.218954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.688 [2024-12-14 18:31:20.218984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.688 2024/12/14 18:31:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.688 [2024-12-14 18:31:20.235845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.688 [2024-12-14 18:31:20.235877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.688 2024/12/14 18:31:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
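The storm above is the same failure over and over: nvmf_subsystem_add_ns keeps being called with nsid:1 while NSID 1 is still registered on nqn.2016-06.io.spdk:cnode1, so spdk_nvmf_subsystem_add_ns_ext rejects each attempt and the RPC layer surfaces it as JSON-RPC error -32602 (Invalid parameters). A minimal sketch that reproduces the same rejection by hand, assuming a running SPDK target that already exposes a malloc0 bdev and the stock scripts/rpc.py client (the rpc_cmd calls seen later in this log are the test suite's wrapper around the same RPCs):

    # First call claims NSID 1 for malloc0 and succeeds
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Re-adding the same NSID is rejected exactly as logged:
    # "Requested NSID 1 already in use" -> Code=-32602 Msg=Invalid parameters
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1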
[... the add_ns failure triplet continues through 18:31:20.279 as the timed I/O job winds down; repetitions elided ...]
00:11:04.688 14570.80 IOPS, 113.83 MiB/s [2024-12-14T18:31:20.441Z]
00:11:04.688 Latency(us)
00:11:04.688 [2024-12-14T18:31:20.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:04.688 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:04.689 Nvme1n1 : 5.01 14570.79 113.83 0.00 0.00 8774.32 3217.22 19303.33
00:11:04.689 [2024-12-14T18:31:20.442Z] ===================================================================================================================
00:11:04.689 [2024-12-14T18:31:20.442Z] Total : 14570.79 113.83 0.00 0.00 8774.32 3217.22 19303.33
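As a cross-check, the summary is internally consistent: 14570.79 IOPS of 8192-byte I/O works out to 14570.79 x 8192 / 1048576 = 113.8 MiB/s, matching the MiB/s column, and by Little's law the 128-deep queue divided by the 8774.32 us average latency gives 128 / 0.00877432 s = roughly 14588 IOPS, in line with the measured rate over the 5.01 s runtime.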
[... the add_ns failure triplet resumes immediately after the summary and repeats every 12-16 ms, timestamps 18:31:20.291 through 18:31:20.543; repetitions elided ...]
00:11:04.948 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (82195) - No such process
00:11:04.948 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 82195
00:11:04.948 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:04.948 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:04.948 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:04.948 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:04.948 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:11:04.948 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:04.948 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:04.948 delay0
00:11:04.948 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:04.948 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:11:04.948 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:04.948 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:04.948 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:04.948 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
00:11:05.207 [2024-12-14 18:31:20.723363] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:11:11.775 Initializing NVMe Controllers
00:11:11.775 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:11:11.775 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:11.775 Initialization complete. Launching workers.
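Condensed, the trace above tears down the malloc0-backed namespace, rebuilds it behind a delay bdev, and then lets the abort example tool hammer it; the run's results follow below. The same sequence as a standalone sketch, assuming a target reachable through the stock scripts/rpc.py client (names, addresses, and flag values exactly as in this run; bdev_delay_create's four latency arguments are in microseconds):

    # Replace the direct malloc0 namespace with an artificially slow delay bdev
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # With completions now stuck behind ~1 s delays, queue 64-deep randrw
    # traffic for 5 s and exercise the abort path on outstanding commands
    /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
        -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'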
00:11:11.775 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 68 00:11:11.776 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 355, failed to submit 33 00:11:11.776 success 155, unsuccessful 200, failed 0 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:11.776 rmmod nvme_tcp 00:11:11.776 rmmod nvme_fabrics 00:11:11.776 rmmod nvme_keyring 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 82020 ']' 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 82020 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 82020 ']' 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 82020 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82020 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:11.776 killing process with pid 82020 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82020' 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 82020 00:11:11.776 18:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 82020 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:11:11.776 18:31:27 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:11:11.776 ************************************ 00:11:11.776 END TEST nvmf_zcopy 00:11:11.776 ************************************ 00:11:11.776 00:11:11.776 real 0m25.196s 00:11:11.776 user 0m40.236s 00:11:11.776 sys 0m6.963s 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.776 18:31:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:11.776 ************************************ 00:11:11.776 START TEST nvmf_nmic 00:11:11.776 ************************************ 00:11:11.776 18:31:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:12.035 * Looking for test storage... 00:11:12.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:12.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.035 --rc genhtml_branch_coverage=1 00:11:12.035 --rc genhtml_function_coverage=1 00:11:12.035 --rc genhtml_legend=1 00:11:12.035 --rc geninfo_all_blocks=1 00:11:12.035 --rc geninfo_unexecuted_blocks=1 00:11:12.035 00:11:12.035 ' 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:12.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.035 --rc genhtml_branch_coverage=1 00:11:12.035 --rc genhtml_function_coverage=1 00:11:12.035 --rc genhtml_legend=1 00:11:12.035 --rc geninfo_all_blocks=1 00:11:12.035 --rc geninfo_unexecuted_blocks=1 00:11:12.035 00:11:12.035 ' 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:12.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.035 --rc genhtml_branch_coverage=1 00:11:12.035 --rc genhtml_function_coverage=1 00:11:12.035 --rc genhtml_legend=1 00:11:12.035 --rc geninfo_all_blocks=1 00:11:12.035 --rc geninfo_unexecuted_blocks=1 00:11:12.035 00:11:12.035 ' 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:12.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.035 --rc genhtml_branch_coverage=1 00:11:12.035 --rc genhtml_function_coverage=1 00:11:12.035 --rc genhtml_legend=1 00:11:12.035 --rc geninfo_all_blocks=1 00:11:12.035 --rc geninfo_unexecuted_blocks=1 00:11:12.035 00:11:12.035 ' 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.035 18:31:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.035 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:12.036 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:12.036 18:31:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:12.036 Cannot 
find device "nvmf_init_br" 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:12.036 Cannot find device "nvmf_init_br2" 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:12.036 Cannot find device "nvmf_tgt_br" 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:11:12.036 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:12.295 Cannot find device "nvmf_tgt_br2" 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:12.295 Cannot find device "nvmf_init_br" 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:12.295 Cannot find device "nvmf_init_br2" 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:12.295 Cannot find device "nvmf_tgt_br" 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:12.295 Cannot find device "nvmf_tgt_br2" 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:12.295 Cannot find device "nvmf_br" 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:12.295 Cannot find device "nvmf_init_if" 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:12.295 Cannot find device "nvmf_init_if2" 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:12.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:12.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
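The failed deletes above are expected: on a fresh run none of these devices exist yet, and the bare `true` records left by the command guards let cleanup proceed. Setup then rebuilds the topology, beginning with the netns and first veth pair above and continuing in the records below. Condensed into one place, the network nvmf_veth_init assembles looks roughly like the following sketch (root plus iproute2/iptables assumed; device names and addresses are exactly those in the trace, ordering lightly compressed):

# Namespace for the SPDK target; the initiator side stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk

# Two veth pairs per side: *_if endpoints carry traffic, *_br peers face the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side endpoints move into the namespace.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiators take 10.0.0.1/.2, targets 10.0.0.3/.4, all /24.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring every endpoint up, including loopback inside the namespace.
for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Stitch the four bridge-facing peers into one bridge, nvmf_br.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" master nvmf_br
done

# Open TCP/4420 on the initiator interfaces and allow bridge-local forwarding.
# The harness's ipts wrapper embeds the full rule text in an SPDK_NVMF comment so
# teardown can strip the rules via iptables-save | grep -v SPDK_NVMF | iptables-restore;
# a short tag works the same way.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF

The four pings that follow confirm both directions across the bridge before any NVMe-oF traffic is attempted.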
00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:12.295 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:12.295 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:12.295 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:12.295 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:12.295 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:12.553 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:12.553 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:12.553 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:12.553 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:12.553 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:12.553 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:12.553 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:12.553 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:12.553 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:12.554 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:11:12.554 00:11:12.554 --- 10.0.0.3 ping statistics --- 00:11:12.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.554 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:12.554 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:12.554 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:11:12.554 00:11:12.554 --- 10.0.0.4 ping statistics --- 00:11:12.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.554 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:12.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:12.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:11:12.554 00:11:12.554 --- 10.0.0.1 ping statistics --- 00:11:12.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.554 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:12.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:12.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:11:12.554 00:11:12.554 --- 10.0.0.2 ping statistics --- 00:11:12.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.554 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # return 0 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=82577 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 82577 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 82577 ']' 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:12.554 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:12.554 [2024-12-14 18:31:28.194066] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
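This banner (its DPDK EAL parameter line continues below) is the nvmf_tgt instance that nvmfappstart launched a few records earlier via the namespace-prefixed NVMF_APP, pid 82577 here. A minimal sketch of that start-and-wait pattern, assuming the repo layout and default RPC socket visible in this trace (the real waitforlisten helper is more elaborate; the polling loop below only approximates it):

NS=(ip netns exec nvmf_tgt_ns_spdk)
SPDK=/home/vagrant/spdk_repo/spdk

# Start the target on cores 0-3 (-m 0xF) with all tracepoint groups (-e 0xFFFF)
# and shared-memory id 0 (-i 0), inside the target namespace.
"${NS[@]}" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the JSON-RPC socket until the app answers; rpc.py exits non-zero
# while /var/tmp/spdk.sock is not accepting requests yet.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done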
00:11:12.554 [2024-12-14 18:31:28.194154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.812 [2024-12-14 18:31:28.338902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.812 [2024-12-14 18:31:28.426270] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.812 [2024-12-14 18:31:28.426355] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.812 [2024-12-14 18:31:28.426369] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.812 [2024-12-14 18:31:28.426381] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.812 [2024-12-14 18:31:28.426390] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.812 [2024-12-14 18:31:28.426843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.812 [2024-12-14 18:31:28.426926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.812 [2024-12-14 18:31:28.427121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.812 [2024-12-14 18:31:28.427125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.070 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.071 [2024-12-14 18:31:28.648383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.071 Malloc0 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.071 18:31:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.071 [2024-12-14 18:31:28.717138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:13.071 test case1: single bdev can't be used in multiple subsystems 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.071 [2024-12-14 18:31:28.740946] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:13.071 [2024-12-14 18:31:28.740979] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:13.071 [2024-12-14 18:31:28.740999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.071 2024/12/14 18:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:13.071 request: 00:11:13.071 { 00:11:13.071 "method": "nvmf_subsystem_add_ns", 00:11:13.071 "params": { 00:11:13.071 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:13.071 "namespace": { 00:11:13.071 "bdev_name": "Malloc0", 00:11:13.071 "no_auto_visible": false 00:11:13.071 } 00:11:13.071 } 00:11:13.071 } 00:11:13.071 Got JSON-RPC error response 00:11:13.071 GoRPCClient: error on JSON-RPC call 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:13.071 Adding namespace failed - expected result. 00:11:13.071 test case2: host connect to nvmf target in multiple paths 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.071 [2024-12-14 18:31:28.753035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.071 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:13.331 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:11:13.592 18:31:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:13.592 18:31:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:13.592 18:31:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.592 18:31:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:13.592 18:31:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:15.494 18:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:15.494 18:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:15.494 18:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.494 18:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:15.494 18:31:31 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.494 18:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:15.494 18:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:15.494 [global] 00:11:15.494 thread=1 00:11:15.494 invalidate=1 00:11:15.494 rw=write 00:11:15.494 time_based=1 00:11:15.494 runtime=1 00:11:15.494 ioengine=libaio 00:11:15.494 direct=1 00:11:15.494 bs=4096 00:11:15.494 iodepth=1 00:11:15.494 norandommap=0 00:11:15.494 numjobs=1 00:11:15.494 00:11:15.494 verify_dump=1 00:11:15.494 verify_backlog=512 00:11:15.494 verify_state_save=0 00:11:15.494 do_verify=1 00:11:15.494 verify=crc32c-intel 00:11:15.494 [job0] 00:11:15.494 filename=/dev/nvme0n1 00:11:15.494 Could not set queue depth (nvme0n1) 00:11:15.753 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:15.753 fio-3.35 00:11:15.753 Starting 1 thread 00:11:17.128 00:11:17.128 job0: (groupid=0, jobs=1): err= 0: pid=82673: Sat Dec 14 18:31:32 2024 00:11:17.128 read: IOPS=3444, BW=13.5MiB/s (14.1MB/s)(13.5MiB/1001msec) 00:11:17.128 slat (nsec): min=13511, max=69416, avg=16820.67, stdev=4968.84 00:11:17.128 clat (usec): min=117, max=409, avg=138.86, stdev=16.42 00:11:17.128 lat (usec): min=133, max=425, avg=155.68, stdev=17.05 00:11:17.128 clat percentiles (usec): 00:11:17.128 | 1.00th=[ 122], 5.00th=[ 125], 10.00th=[ 126], 20.00th=[ 128], 00:11:17.128 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:11:17.128 | 70.00th=[ 143], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 165], 00:11:17.128 | 99.00th=[ 186], 99.50th=[ 196], 99.90th=[ 334], 99.95th=[ 343], 00:11:17.128 | 99.99th=[ 412] 00:11:17.128 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:17.128 slat (nsec): min=19591, max=98552, avg=25393.60, stdev=7429.98 00:11:17.128 clat (usec): min=81, max=731, avg=100.47, stdev=21.26 00:11:17.128 lat (usec): min=104, max=759, avg=125.86, stdev=22.81 00:11:17.128 clat percentiles (usec): 00:11:17.128 | 1.00th=[ 86], 5.00th=[ 88], 10.00th=[ 89], 20.00th=[ 91], 00:11:17.128 | 30.00th=[ 93], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 98], 00:11:17.128 | 70.00th=[ 102], 80.00th=[ 109], 90.00th=[ 119], 95.00th=[ 127], 00:11:17.128 | 99.00th=[ 145], 99.50th=[ 198], 99.90th=[ 400], 99.95th=[ 537], 00:11:17.128 | 99.99th=[ 734] 00:11:17.128 bw ( KiB/s): min=16280, max=16280, per=100.00%, avg=16280.00, stdev= 0.00, samples=1 00:11:17.128 iops : min= 4070, max= 4070, avg=4070.00, stdev= 0.00, samples=1 00:11:17.128 lat (usec) : 100=33.86%, 250=65.87%, 500=0.24%, 750=0.03% 00:11:17.128 cpu : usr=2.20%, sys=11.00%, ctx=7036, majf=0, minf=5 00:11:17.128 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.128 issued rwts: total=3448,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.128 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.128 00:11:17.128 Run status group 0 (all jobs): 00:11:17.128 READ: bw=13.5MiB/s (14.1MB/s), 13.5MiB/s-13.5MiB/s (14.1MB/s-14.1MB/s), io=13.5MiB (14.1MB), run=1001-1001msec 00:11:17.128 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB 
(14.7MB), run=1001-1001msec 00:11:17.128 00:11:17.128 Disk stats (read/write): 00:11:17.128 nvme0n1: ios=3122/3288, merge=0/0, ticks=473/368, in_queue=841, util=91.28% 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:17.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:17.128 rmmod nvme_tcp 00:11:17.128 rmmod nvme_fabrics 00:11:17.128 rmmod nvme_keyring 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 82577 ']' 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 82577 00:11:17.128 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 82577 ']' 00:11:17.129 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 82577 00:11:17.129 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:17.129 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:17.129 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82577 00:11:17.129 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:17.129 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:17.129 killing process with pid 82577 00:11:17.129 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 82577' 00:11:17.129 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 82577 00:11:17.129 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 82577 00:11:17.388 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:17.388 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:17.388 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:17.388 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:17.388 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:11:17.388 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:11:17.388 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:17.388 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:17.388 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:17.388 18:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:17.388 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:17.388 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:17.388 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:17.388 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:17.388 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:17.388 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:17.388 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:17.388 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:17.388 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:17.647 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:17.647 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:17.647 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:17.647 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:17.647 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.647 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.647 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.647 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:11:17.647 00:11:17.647 real 0m5.740s 00:11:17.647 user 0m17.634s 00:11:17.647 sys 0m1.595s 00:11:17.647 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:17.647 
************************************ 00:11:17.647 END TEST nvmf_nmic 00:11:17.647 ************************************ 00:11:17.647 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:17.647 18:31:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:17.647 18:31:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:17.647 18:31:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.647 18:31:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:17.647 ************************************ 00:11:17.647 START TEST nvmf_fio_target 00:11:17.647 ************************************ 00:11:17.647 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:17.907 * Looking for test storage... 00:11:17.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:17.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.907 --rc genhtml_branch_coverage=1 00:11:17.907 --rc genhtml_function_coverage=1 00:11:17.907 --rc genhtml_legend=1 00:11:17.907 --rc geninfo_all_blocks=1 00:11:17.907 --rc geninfo_unexecuted_blocks=1 00:11:17.907 00:11:17.907 ' 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:17.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.907 --rc genhtml_branch_coverage=1 00:11:17.907 --rc genhtml_function_coverage=1 00:11:17.907 --rc genhtml_legend=1 00:11:17.907 --rc geninfo_all_blocks=1 00:11:17.907 --rc geninfo_unexecuted_blocks=1 00:11:17.907 00:11:17.907 ' 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:17.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.907 --rc genhtml_branch_coverage=1 00:11:17.907 --rc genhtml_function_coverage=1 00:11:17.907 --rc genhtml_legend=1 00:11:17.907 --rc geninfo_all_blocks=1 00:11:17.907 --rc geninfo_unexecuted_blocks=1 00:11:17.907 00:11:17.907 ' 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:17.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.907 --rc genhtml_branch_coverage=1 00:11:17.907 --rc genhtml_function_coverage=1 00:11:17.907 --rc genhtml_legend=1 00:11:17.907 --rc geninfo_all_blocks=1 00:11:17.907 --rc geninfo_unexecuted_blocks=1 00:11:17.907 00:11:17.907 ' 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:17.907 
18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.907 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:17.908 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:17.908 18:31:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:17.908 Cannot find device "nvmf_init_br" 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:17.908 Cannot find device "nvmf_init_br2" 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:17.908 Cannot find device "nvmf_tgt_br" 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:17.908 Cannot find device "nvmf_tgt_br2" 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:17.908 Cannot find device "nvmf_init_br" 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:17.908 Cannot find device "nvmf_init_br2" 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:17.908 Cannot find device "nvmf_tgt_br" 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:17.908 Cannot find device "nvmf_tgt_br2" 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:11:17.908 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:18.168 Cannot find device "nvmf_br" 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:18.168 Cannot find device "nvmf_init_if" 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:18.168 Cannot find device "nvmf_init_if2" 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:18.168 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:11:18.168 
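The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init begins by tearing down any topology left over from a previous run, and the trace shows each failing cleanup command immediately followed by "true", i.e. the script tolerates a missing interface instead of aborting. A minimal sketch of that idiom, with an assumed helper name purely for illustration (only the commands and the "|| true" tolerance are taken from the trace; the real common.sh may be structured differently):

    # Idempotent cleanup of a possibly-absent interface (hypothetical helper).
    cleanup_iface() {
        local iface=$1
        ip link set "$iface" nomaster || true   # detach from bridge; fine if already gone
        ip link set "$iface" down    || true    # harmless if the device does not exist
    }
    cleanup_iface nvmf_init_br
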
18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:18.168 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:11:18.168 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:18.427 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:18.427 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:18.427 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:18.427 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:18.427 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:18.427 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:18.427 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:18.427 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:18.427 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:18.427 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.161 ms 00:11:18.427 00:11:18.427 --- 10.0.0.3 ping statistics --- 00:11:18.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.427 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:11:18.427 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:18.427 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:18.427 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.165 ms 00:11:18.427 00:11:18.427 --- 10.0.0.4 ping statistics --- 00:11:18.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.427 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:11:18.427 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:18.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:11:18.427 00:11:18.427 --- 10.0.0.1 ping statistics --- 00:11:18.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.427 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:11:18.427 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:18.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:18.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:11:18.427 00:11:18.427 --- 10.0.0.2 ping statistics --- 00:11:18.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.427 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:18.427 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.427 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # return 0 00:11:18.427 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:18.428 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.428 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:18.428 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:18.428 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.428 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:18.428 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:18.428 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:18.428 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:18.428 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:18.428 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.428 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=82901 00:11:18.428 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:18.428 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 82901 00:11:18.428 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 82901 ']' 00:11:18.428 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.428 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:18.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.428 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.428 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:18.428 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.428 [2024-12-14 18:31:34.089265] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
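At this point the virtual test network is fully assembled: the initiator-side veth ends nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) stay in the root namespace, the target-side ends nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, and all four peer interfaces are enslaved to the nvmf_br bridge. The ipts wrapper inserts iptables ACCEPT rules for TCP port 4420 tagged with an SPDK_NVMF comment, presumably so they can be matched and removed during teardown, and the four pings confirm reachability in both directions before the target is started. A condensed sketch of the same topology, reduced to a single initiator/target pair (names and addresses as in the trace):

    # One initiator/target pair of the bridge topology built above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                     # bridge the peer ends together
    ip link set nvmf_tgt_br master nvmf_br
    # (every device also needs "ip link set ... up", as in the trace), then:
    ping -c 1 10.0.0.3                                          # root ns -> target ns
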
00:11:18.428 [2024-12-14 18:31:34.089380] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.687 [2024-12-14 18:31:34.232176] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.687 [2024-12-14 18:31:34.305655] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.687 [2024-12-14 18:31:34.306033] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.687 [2024-12-14 18:31:34.306151] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.687 [2024-12-14 18:31:34.306244] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.687 [2024-12-14 18:31:34.306318] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:18.687 [2024-12-14 18:31:34.306559] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.687 [2024-12-14 18:31:34.306716] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.687 [2024-12-14 18:31:34.306827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:18.687 [2024-12-14 18:31:34.306828] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.945 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:18.945 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:18.945 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:18.945 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:18.945 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.946 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.946 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:19.204 [2024-12-14 18:31:34.836046] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.204 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:19.463 18:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:19.463 18:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:20.030 18:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:20.030 18:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:20.288 18:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:20.288 18:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:20.547 18:31:36 
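The target application is now up: nvmf_tgt runs inside the namespace as pid 82901 with -m 0xF (one reactor on each of the four available cores) and -e 0xFFFF (tracepoint group mask, per the app_setup_trace notices), waitforlisten blocks until /var/tmp/spdk.sock accepts RPCs, and fio.sh starts building its block devices: 64 MiB malloc ramdisks with 512-byte blocks, per MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE. Condensed, the control plane so far plus the data-path setup that follows below looks like this (paths abbreviated, host NQN/ID elided; the run generates them with "nvme gen-hostnqn", and rpc.py talks to /var/tmp/spdk.sock by default):

    # Start the target inside the test namespace and create the TCP transport.
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512    # 64 MiB ramdisk, 512 B blocks -> prints "Malloc0"
    # As the trace below shows, more malloc bdevs are combined into raid0
    # (striped) and concat0 volumes, then exported and connected to:
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn=... --hostid=...   # elided; generated per-run as shown earlier
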
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:20.547 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:20.805 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:21.064 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:21.064 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:21.322 18:31:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:21.322 18:31:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:21.889 18:31:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:21.889 18:31:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:21.889 18:31:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:22.148 18:31:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:22.148 18:31:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:22.407 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:22.407 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:22.666 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:22.924 [2024-12-14 18:31:38.602819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:22.924 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:23.183 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:23.444 18:31:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:23.702 18:31:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:23.702 18:31:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:23.702 18:31:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:11:23.702 18:31:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:23.702 18:31:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:23.702 18:31:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:25.606 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:25.606 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:25.606 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.606 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:25.606 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.606 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:25.606 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:25.606 [global] 00:11:25.606 thread=1 00:11:25.606 invalidate=1 00:11:25.606 rw=write 00:11:25.606 time_based=1 00:11:25.606 runtime=1 00:11:25.606 ioengine=libaio 00:11:25.606 direct=1 00:11:25.606 bs=4096 00:11:25.606 iodepth=1 00:11:25.606 norandommap=0 00:11:25.606 numjobs=1 00:11:25.606 00:11:25.606 verify_dump=1 00:11:25.606 verify_backlog=512 00:11:25.606 verify_state_save=0 00:11:25.606 do_verify=1 00:11:25.606 verify=crc32c-intel 00:11:25.606 [job0] 00:11:25.606 filename=/dev/nvme0n1 00:11:25.606 [job1] 00:11:25.606 filename=/dev/nvme0n2 00:11:25.606 [job2] 00:11:25.606 filename=/dev/nvme0n3 00:11:25.606 [job3] 00:11:25.606 filename=/dev/nvme0n4 00:11:25.865 Could not set queue depth (nvme0n1) 00:11:25.865 Could not set queue depth (nvme0n2) 00:11:25.865 Could not set queue depth (nvme0n3) 00:11:25.865 Could not set queue depth (nvme0n4) 00:11:25.865 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.865 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.865 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.865 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.865 fio-3.35 00:11:25.865 Starting 4 threads 00:11:27.242 00:11:27.243 job0: (groupid=0, jobs=1): err= 0: pid=83193: Sat Dec 14 18:31:42 2024 00:11:27.243 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:27.243 slat (nsec): min=9966, max=75792, avg=14162.49, stdev=4929.92 00:11:27.243 clat (usec): min=216, max=40654, avg=316.77, stdev=1032.14 00:11:27.243 lat (usec): min=229, max=40674, avg=330.94, stdev=1032.36 00:11:27.243 clat percentiles (usec): 00:11:27.243 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 249], 00:11:27.243 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:11:27.243 | 70.00th=[ 293], 80.00th=[ 347], 90.00th=[ 383], 95.00th=[ 408], 00:11:27.243 | 99.00th=[ 453], 99.50th=[ 469], 99.90th=[ 1729], 99.95th=[40633], 00:11:27.243 | 99.99th=[40633] 00:11:27.243 write: IOPS=1907, BW=7628KiB/s (7811kB/s)(7636KiB/1001msec); 0 zone resets 00:11:27.243 
slat (usec): min=15, max=112, avg=24.27, stdev= 7.46 00:11:27.243 clat (usec): min=109, max=1688, avg=230.21, stdev=68.77 00:11:27.243 lat (usec): min=132, max=1709, avg=254.48, stdev=71.01 00:11:27.243 clat percentiles (usec): 00:11:27.243 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:11:27.243 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 206], 00:11:27.243 | 70.00th=[ 227], 80.00th=[ 297], 90.00th=[ 330], 95.00th=[ 351], 00:11:27.243 | 99.00th=[ 388], 99.50th=[ 404], 99.90th=[ 474], 99.95th=[ 1696], 00:11:27.243 | 99.99th=[ 1696] 00:11:27.243 bw ( KiB/s): min= 7280, max= 7280, per=23.78%, avg=7280.00, stdev= 0.00, samples=1 00:11:27.243 iops : min= 1820, max= 1820, avg=1820.00, stdev= 0.00, samples=1 00:11:27.243 lat (usec) : 250=51.23%, 500=48.65%, 750=0.03% 00:11:27.243 lat (msec) : 2=0.06%, 50=0.03% 00:11:27.243 cpu : usr=1.50%, sys=5.10%, ctx=3448, majf=0, minf=9 00:11:27.243 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.243 issued rwts: total=1536,1909,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.243 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.243 job1: (groupid=0, jobs=1): err= 0: pid=83194: Sat Dec 14 18:31:42 2024 00:11:27.243 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:27.243 slat (nsec): min=14896, max=72759, avg=20321.03, stdev=6295.85 00:11:27.243 clat (usec): min=144, max=656, avg=301.90, stdev=84.73 00:11:27.243 lat (usec): min=163, max=680, avg=322.22, stdev=87.65 00:11:27.243 clat percentiles (usec): 00:11:27.243 | 1.00th=[ 233], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 243], 00:11:27.243 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 269], 00:11:27.243 | 70.00th=[ 293], 80.00th=[ 404], 90.00th=[ 453], 95.00th=[ 478], 00:11:27.243 | 99.00th=[ 510], 99.50th=[ 537], 99.90th=[ 578], 99.95th=[ 660], 00:11:27.243 | 99.99th=[ 660] 00:11:27.243 write: IOPS=1904, BW=7616KiB/s (7799kB/s)(7624KiB/1001msec); 0 zone resets 00:11:27.243 slat (usec): min=18, max=142, avg=31.36, stdev=12.44 00:11:27.243 clat (usec): min=91, max=689, avg=229.53, stdev=74.96 00:11:27.243 lat (usec): min=136, max=714, avg=260.89, stdev=82.68 00:11:27.243 clat percentiles (usec): 00:11:27.243 | 1.00th=[ 153], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 184], 00:11:27.243 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 202], 00:11:27.243 | 70.00th=[ 212], 80.00th=[ 293], 90.00th=[ 367], 95.00th=[ 400], 00:11:27.243 | 99.00th=[ 441], 99.50th=[ 453], 99.90th=[ 482], 99.95th=[ 693], 00:11:27.243 | 99.99th=[ 693] 00:11:27.243 bw ( KiB/s): min= 7360, max= 7360, per=24.04%, avg=7360.00, stdev= 0.00, samples=1 00:11:27.243 iops : min= 1840, max= 1840, avg=1840.00, stdev= 0.00, samples=1 00:11:27.243 lat (usec) : 100=0.06%, 250=58.16%, 500=41.08%, 750=0.70% 00:11:27.243 cpu : usr=1.80%, sys=6.60%, ctx=3450, majf=0, minf=12 00:11:27.243 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.243 issued rwts: total=1536,1906,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.243 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.243 job2: (groupid=0, jobs=1): err= 0: pid=83195: Sat Dec 14 18:31:42 2024 
00:11:27.243 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:27.243 slat (nsec): min=9642, max=63261, avg=15960.41, stdev=5438.23 00:11:27.243 clat (usec): min=211, max=40706, avg=315.02, stdev=1033.55 00:11:27.243 lat (usec): min=234, max=40719, avg=330.98, stdev=1033.58 00:11:27.243 clat percentiles (usec): 00:11:27.243 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 245], 00:11:27.243 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:11:27.243 | 70.00th=[ 289], 80.00th=[ 347], 90.00th=[ 379], 95.00th=[ 404], 00:11:27.243 | 99.00th=[ 457], 99.50th=[ 478], 99.90th=[ 1778], 99.95th=[40633], 00:11:27.243 | 99.99th=[40633] 00:11:27.243 write: IOPS=1908, BW=7632KiB/s (7816kB/s)(7640KiB/1001msec); 0 zone resets 00:11:27.243 slat (nsec): min=17841, max=80466, avg=24380.28, stdev=7077.91 00:11:27.243 clat (usec): min=111, max=1630, avg=229.98, stdev=68.17 00:11:27.243 lat (usec): min=137, max=1651, avg=254.36, stdev=70.02 00:11:27.243 clat percentiles (usec): 00:11:27.243 | 1.00th=[ 165], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:11:27.243 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 206], 00:11:27.243 | 70.00th=[ 225], 80.00th=[ 297], 90.00th=[ 326], 95.00th=[ 351], 00:11:27.243 | 99.00th=[ 388], 99.50th=[ 404], 99.90th=[ 486], 99.95th=[ 1631], 00:11:27.243 | 99.99th=[ 1631] 00:11:27.243 bw ( KiB/s): min= 7280, max= 7280, per=23.78%, avg=7280.00, stdev= 0.00, samples=1 00:11:27.243 iops : min= 1820, max= 1820, avg=1820.00, stdev= 0.00, samples=1 00:11:27.243 lat (usec) : 250=52.32%, 500=47.53%, 750=0.06% 00:11:27.243 lat (msec) : 2=0.06%, 50=0.03% 00:11:27.243 cpu : usr=1.60%, sys=5.00%, ctx=3446, majf=0, minf=9 00:11:27.243 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.243 issued rwts: total=1536,1910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.243 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.243 job3: (groupid=0, jobs=1): err= 0: pid=83196: Sat Dec 14 18:31:42 2024 00:11:27.243 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:27.243 slat (usec): min=12, max=108, avg=22.15, stdev=11.61 00:11:27.243 clat (usec): min=134, max=599, avg=296.38, stdev=79.00 00:11:27.243 lat (usec): min=165, max=629, avg=318.53, stdev=87.61 00:11:27.243 clat percentiles (usec): 00:11:27.243 | 1.00th=[ 212], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 245], 00:11:27.243 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:11:27.243 | 70.00th=[ 285], 80.00th=[ 388], 90.00th=[ 441], 95.00th=[ 465], 00:11:27.243 | 99.00th=[ 506], 99.50th=[ 523], 99.90th=[ 586], 99.95th=[ 603], 00:11:27.243 | 99.99th=[ 603] 00:11:27.243 write: IOPS=1935, BW=7740KiB/s (7926kB/s)(7748KiB/1001msec); 0 zone resets 00:11:27.243 slat (usec): min=17, max=171, avg=30.32, stdev=12.68 00:11:27.243 clat (usec): min=105, max=675, avg=229.17, stdev=75.54 00:11:27.243 lat (usec): min=123, max=698, avg=259.49, stdev=83.84 00:11:27.243 clat percentiles (usec): 00:11:27.243 | 1.00th=[ 124], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 186], 00:11:27.243 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:11:27.243 | 70.00th=[ 210], 80.00th=[ 285], 90.00th=[ 363], 95.00th=[ 408], 00:11:27.243 | 99.00th=[ 445], 99.50th=[ 461], 99.90th=[ 562], 99.95th=[ 676], 00:11:27.243 | 99.99th=[ 676] 00:11:27.243 bw ( KiB/s): min= 
7384, max= 7384, per=24.12%, avg=7384.00, stdev= 0.00, samples=1 00:11:27.243 iops : min= 1846, max= 1846, avg=1846.00, stdev= 0.00, samples=1 00:11:27.243 lat (usec) : 250=57.59%, 500=41.84%, 750=0.58% 00:11:27.243 cpu : usr=1.30%, sys=7.60%, ctx=3474, majf=0, minf=7 00:11:27.243 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.243 issued rwts: total=1536,1937,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.243 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.243 00:11:27.243 Run status group 0 (all jobs): 00:11:27.243 READ: bw=24.0MiB/s (25.1MB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=24.0MiB (25.2MB), run=1001-1001msec 00:11:27.243 WRITE: bw=29.9MiB/s (31.4MB/s), 7616KiB/s-7740KiB/s (7799kB/s-7926kB/s), io=29.9MiB (31.4MB), run=1001-1001msec 00:11:27.243 00:11:27.243 Disk stats (read/write): 00:11:27.243 nvme0n1: ios=1351/1536, merge=0/0, ticks=449/381, in_queue=830, util=87.58% 00:11:27.243 nvme0n2: ios=1348/1536, merge=0/0, ticks=430/392, in_queue=822, util=88.22% 00:11:27.243 nvme0n3: ios=1301/1536, merge=0/0, ticks=417/375, in_queue=792, util=89.05% 00:11:27.243 nvme0n4: ios=1320/1536, merge=0/0, ticks=416/380, in_queue=796, util=89.72% 00:11:27.243 18:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:27.243 [global] 00:11:27.243 thread=1 00:11:27.243 invalidate=1 00:11:27.243 rw=randwrite 00:11:27.243 time_based=1 00:11:27.243 runtime=1 00:11:27.243 ioengine=libaio 00:11:27.243 direct=1 00:11:27.243 bs=4096 00:11:27.243 iodepth=1 00:11:27.243 norandommap=0 00:11:27.243 numjobs=1 00:11:27.243 00:11:27.243 verify_dump=1 00:11:27.243 verify_backlog=512 00:11:27.243 verify_state_save=0 00:11:27.243 do_verify=1 00:11:27.243 verify=crc32c-intel 00:11:27.243 [job0] 00:11:27.243 filename=/dev/nvme0n1 00:11:27.243 [job1] 00:11:27.243 filename=/dev/nvme0n2 00:11:27.243 [job2] 00:11:27.243 filename=/dev/nvme0n3 00:11:27.243 [job3] 00:11:27.243 filename=/dev/nvme0n4 00:11:27.243 Could not set queue depth (nvme0n1) 00:11:27.243 Could not set queue depth (nvme0n2) 00:11:27.243 Could not set queue depth (nvme0n3) 00:11:27.243 Could not set queue depth (nvme0n4) 00:11:27.243 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.243 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.243 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.243 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.243 fio-3.35 00:11:27.243 Starting 4 threads 00:11:28.621 00:11:28.621 job0: (groupid=0, jobs=1): err= 0: pid=83249: Sat Dec 14 18:31:44 2024 00:11:28.621 read: IOPS=1810, BW=7241KiB/s (7415kB/s)(7248KiB/1001msec) 00:11:28.621 slat (nsec): min=8380, max=57588, avg=14769.74, stdev=4970.69 00:11:28.621 clat (usec): min=142, max=5833, avg=279.99, stdev=183.19 00:11:28.621 lat (usec): min=166, max=5867, avg=294.76, stdev=183.80 00:11:28.621 clat percentiles (usec): 00:11:28.621 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 239], 00:11:28.621 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 269], 
00:11:28.621 | 70.00th=[ 281], 80.00th=[ 322], 90.00th=[ 343], 95.00th=[ 351], 00:11:28.621 | 99.00th=[ 388], 99.50th=[ 408], 99.90th=[ 4047], 99.95th=[ 5866], 00:11:28.621 | 99.99th=[ 5866] 00:11:28.621 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:28.621 slat (nsec): min=10939, max=95109, avg=19919.59, stdev=7097.42 00:11:28.621 clat (usec): min=100, max=7624, avg=204.64, stdev=178.92 00:11:28.621 lat (usec): min=121, max=7648, avg=224.56, stdev=179.45 00:11:28.621 clat percentiles (usec): 00:11:28.621 | 1.00th=[ 112], 5.00th=[ 118], 10.00th=[ 124], 20.00th=[ 157], 00:11:28.621 | 30.00th=[ 176], 40.00th=[ 190], 50.00th=[ 206], 60.00th=[ 223], 00:11:28.621 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 262], 00:11:28.621 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 791], 99.95th=[ 2507], 00:11:28.621 | 99.99th=[ 7635] 00:11:28.621 bw ( KiB/s): min= 8192, max= 8192, per=19.18%, avg=8192.00, stdev= 0.00, samples=1 00:11:28.621 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:28.621 lat (usec) : 250=63.83%, 500=35.83%, 750=0.18%, 1000=0.03% 00:11:28.621 lat (msec) : 4=0.05%, 10=0.08% 00:11:28.621 cpu : usr=1.10%, sys=5.80%, ctx=3860, majf=0, minf=17 00:11:28.621 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.621 issued rwts: total=1812,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.621 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.621 job1: (groupid=0, jobs=1): err= 0: pid=83250: Sat Dec 14 18:31:44 2024 00:11:28.621 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:28.621 slat (nsec): min=12022, max=60424, avg=15101.64, stdev=4388.02 00:11:28.621 clat (usec): min=130, max=2466, avg=155.07, stdev=46.97 00:11:28.621 lat (usec): min=144, max=2483, avg=170.17, stdev=47.59 00:11:28.621 clat percentiles (usec): 00:11:28.621 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:11:28.621 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:11:28.621 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 174], 00:11:28.621 | 99.00th=[ 198], 99.50th=[ 215], 99.90th=[ 445], 99.95th=[ 979], 00:11:28.621 | 99.99th=[ 2474] 00:11:28.621 write: IOPS=3325, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1001msec); 0 zone resets 00:11:28.621 slat (usec): min=17, max=123, avg=21.66, stdev= 5.49 00:11:28.621 clat (usec): min=88, max=662, avg=118.39, stdev=13.84 00:11:28.621 lat (usec): min=115, max=715, avg=140.05, stdev=15.89 00:11:28.621 clat percentiles (usec): 00:11:28.621 | 1.00th=[ 101], 5.00th=[ 105], 10.00th=[ 108], 20.00th=[ 111], 00:11:28.621 | 30.00th=[ 114], 40.00th=[ 116], 50.00th=[ 118], 60.00th=[ 120], 00:11:28.621 | 70.00th=[ 123], 80.00th=[ 126], 90.00th=[ 131], 95.00th=[ 137], 00:11:28.621 | 99.00th=[ 149], 99.50th=[ 157], 99.90th=[ 184], 99.95th=[ 239], 00:11:28.621 | 99.99th=[ 660] 00:11:28.621 bw ( KiB/s): min=12688, max=12688, per=29.71%, avg=12688.00, stdev= 0.00, samples=1 00:11:28.621 iops : min= 3172, max= 3172, avg=3172.00, stdev= 0.00, samples=1 00:11:28.621 lat (usec) : 100=0.39%, 250=99.45%, 500=0.09%, 750=0.03%, 1000=0.02% 00:11:28.621 lat (msec) : 4=0.02% 00:11:28.621 cpu : usr=2.50%, sys=8.90%, ctx=6402, majf=0, minf=5 00:11:28.621 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.621 issued rwts: total=3072,3329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.621 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.621 job2: (groupid=0, jobs=1): err= 0: pid=83251: Sat Dec 14 18:31:44 2024 00:11:28.621 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:28.621 slat (nsec): min=8507, max=75523, avg=13743.91, stdev=3251.46 00:11:28.621 clat (usec): min=106, max=755, avg=244.44, stdev=60.71 00:11:28.621 lat (usec): min=152, max=779, avg=258.19, stdev=60.72 00:11:28.621 clat percentiles (usec): 00:11:28.621 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 169], 00:11:28.621 | 30.00th=[ 229], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 255], 00:11:28.621 | 70.00th=[ 265], 80.00th=[ 297], 90.00th=[ 330], 95.00th=[ 338], 00:11:28.621 | 99.00th=[ 363], 99.50th=[ 379], 99.90th=[ 537], 99.95th=[ 537], 00:11:28.622 | 99.99th=[ 758] 00:11:28.622 write: IOPS=2369, BW=9479KiB/s (9706kB/s)(9488KiB/1001msec); 0 zone resets 00:11:28.622 slat (nsec): min=11168, max=87153, avg=18554.32, stdev=5525.94 00:11:28.622 clat (usec): min=104, max=489, avg=177.17, stdev=53.89 00:11:28.622 lat (usec): min=125, max=501, avg=195.73, stdev=52.99 00:11:28.622 clat percentiles (usec): 00:11:28.622 | 1.00th=[ 113], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 126], 00:11:28.622 | 30.00th=[ 131], 40.00th=[ 139], 50.00th=[ 161], 60.00th=[ 180], 00:11:28.622 | 70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 260], 00:11:28.622 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 330], 99.95th=[ 338], 00:11:28.622 | 99.99th=[ 490] 00:11:28.622 bw ( KiB/s): min=11448, max=11448, per=26.81%, avg=11448.00, stdev= 0.00, samples=1 00:11:28.622 iops : min= 2862, max= 2862, avg=2862.00, stdev= 0.00, samples=1 00:11:28.622 lat (usec) : 250=72.67%, 500=27.24%, 750=0.07%, 1000=0.02% 00:11:28.622 cpu : usr=1.80%, sys=5.50%, ctx=4422, majf=0, minf=14 00:11:28.622 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.622 issued rwts: total=2048,2372,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.622 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.622 job3: (groupid=0, jobs=1): err= 0: pid=83252: Sat Dec 14 18:31:44 2024 00:11:28.622 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:28.622 slat (nsec): min=12217, max=66780, avg=15238.83, stdev=5202.58 00:11:28.622 clat (usec): min=141, max=1132, avg=181.00, stdev=51.37 00:11:28.622 lat (usec): min=158, max=1156, avg=196.24, stdev=52.72 00:11:28.622 clat percentiles (usec): 00:11:28.622 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:11:28.622 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:11:28.622 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 206], 95.00th=[ 277], 00:11:28.622 | 99.00th=[ 330], 99.50th=[ 449], 99.90th=[ 971], 99.95th=[ 979], 00:11:28.622 | 99.99th=[ 1139] 00:11:28.622 write: IOPS=2934, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1001msec); 0 zone resets 00:11:28.622 slat (usec): min=17, max=100, avg=21.65, stdev= 5.92 00:11:28.622 clat (usec): min=105, max=909, avg=144.95, stdev=36.90 00:11:28.622 lat (usec): min=127, max=954, avg=166.60, stdev=38.69 00:11:28.622 clat percentiles (usec): 00:11:28.622 | 1.00th=[ 114], 5.00th=[ 117], 10.00th=[ 
120], 20.00th=[ 123], 00:11:28.622 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 137], 00:11:28.622 | 70.00th=[ 143], 80.00th=[ 155], 90.00th=[ 206], 95.00th=[ 219], 00:11:28.622 | 99.00th=[ 243], 99.50th=[ 255], 99.90th=[ 392], 99.95th=[ 498], 00:11:28.622 | 99.99th=[ 914] 00:11:28.622 bw ( KiB/s): min=12288, max=12288, per=28.78%, avg=12288.00, stdev= 0.00, samples=1 00:11:28.622 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:28.622 lat (usec) : 250=96.18%, 500=3.60%, 750=0.13%, 1000=0.07% 00:11:28.622 lat (msec) : 2=0.02% 00:11:28.622 cpu : usr=2.00%, sys=7.30%, ctx=5498, majf=0, minf=11 00:11:28.622 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.622 issued rwts: total=2560,2937,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.622 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.622 00:11:28.622 Run status group 0 (all jobs): 00:11:28.622 READ: bw=37.0MiB/s (38.8MB/s), 7241KiB/s-12.0MiB/s (7415kB/s-12.6MB/s), io=37.1MiB (38.9MB), run=1001-1001msec 00:11:28.622 WRITE: bw=41.7MiB/s (43.7MB/s), 8184KiB/s-13.0MiB/s (8380kB/s-13.6MB/s), io=41.7MiB (43.8MB), run=1001-1001msec 00:11:28.622 00:11:28.622 Disk stats (read/write): 00:11:28.622 nvme0n1: ios=1586/1805, merge=0/0, ticks=468/364, in_queue=832, util=89.08% 00:11:28.622 nvme0n2: ios=2609/2990, merge=0/0, ticks=431/384, in_queue=815, util=89.40% 00:11:28.622 nvme0n3: ios=1859/2048, merge=0/0, ticks=473/356, in_queue=829, util=89.74% 00:11:28.622 nvme0n4: ios=2159/2560, merge=0/0, ticks=401/391, in_queue=792, util=89.76% 00:11:28.622 18:31:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:28.622 [global] 00:11:28.622 thread=1 00:11:28.622 invalidate=1 00:11:28.622 rw=write 00:11:28.622 time_based=1 00:11:28.622 runtime=1 00:11:28.622 ioengine=libaio 00:11:28.622 direct=1 00:11:28.622 bs=4096 00:11:28.622 iodepth=128 00:11:28.622 norandommap=0 00:11:28.622 numjobs=1 00:11:28.622 00:11:28.622 verify_dump=1 00:11:28.622 verify_backlog=512 00:11:28.622 verify_state_save=0 00:11:28.622 do_verify=1 00:11:28.622 verify=crc32c-intel 00:11:28.622 [job0] 00:11:28.622 filename=/dev/nvme0n1 00:11:28.622 [job1] 00:11:28.622 filename=/dev/nvme0n2 00:11:28.622 [job2] 00:11:28.622 filename=/dev/nvme0n3 00:11:28.622 [job3] 00:11:28.622 filename=/dev/nvme0n4 00:11:28.622 Could not set queue depth (nvme0n1) 00:11:28.622 Could not set queue depth (nvme0n2) 00:11:28.622 Could not set queue depth (nvme0n3) 00:11:28.622 Could not set queue depth (nvme0n4) 00:11:28.622 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:28.622 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:28.622 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:28.622 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:28.622 fio-3.35 00:11:28.622 Starting 4 threads 00:11:29.996 00:11:29.996 job0: (groupid=0, jobs=1): err= 0: pid=83316: Sat Dec 14 18:31:45 2024 00:11:29.996 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:11:29.996 slat (usec): min=3, max=3761, avg=93.28, 
stdev=487.21 00:11:29.996 clat (usec): min=7789, max=18608, avg=12625.96, stdev=1084.98 00:11:29.996 lat (usec): min=7802, max=18652, avg=12719.24, stdev=1133.44 00:11:29.996 clat percentiles (usec): 00:11:29.996 | 1.00th=[ 9503], 5.00th=[10552], 10.00th=[11600], 20.00th=[12125], 00:11:29.996 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:11:29.996 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13698], 95.00th=[14222], 00:11:29.996 | 99.00th=[15926], 99.50th=[16188], 99.90th=[16450], 99.95th=[16581], 00:11:29.997 | 99.99th=[18482] 00:11:29.997 write: IOPS=5154, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1003msec); 0 zone resets 00:11:29.997 slat (usec): min=9, max=5821, avg=93.71, stdev=460.58 00:11:29.997 clat (usec): min=300, max=16152, avg=12000.25, stdev=1517.23 00:11:29.997 lat (usec): min=3579, max=16193, avg=12093.96, stdev=1497.19 00:11:29.997 clat percentiles (usec): 00:11:29.997 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10552], 00:11:29.997 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12649], 00:11:29.997 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13304], 95.00th=[13566], 00:11:29.997 | 99.00th=[14484], 99.50th=[14484], 99.90th=[15533], 99.95th=[15664], 00:11:29.997 | 99.99th=[16188] 00:11:29.997 bw ( KiB/s): min=20439, max=20480, per=26.22%, avg=20459.50, stdev=28.99, samples=2 00:11:29.997 iops : min= 5109, max= 5120, avg=5114.50, stdev= 7.78, samples=2 00:11:29.997 lat (usec) : 500=0.01% 00:11:29.997 lat (msec) : 4=0.16%, 10=9.12%, 20=90.72% 00:11:29.997 cpu : usr=4.39%, sys=14.47%, ctx=363, majf=0, minf=5 00:11:29.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:29.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:29.997 issued rwts: total=5120,5170,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:29.997 job1: (groupid=0, jobs=1): err= 0: pid=83317: Sat Dec 14 18:31:45 2024 00:11:29.997 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:11:29.997 slat (usec): min=8, max=6377, avg=93.79, stdev=497.74 00:11:29.997 clat (usec): min=8639, max=18603, avg=12496.01, stdev=990.35 00:11:29.997 lat (usec): min=8657, max=18636, avg=12589.79, stdev=1066.56 00:11:29.997 clat percentiles (usec): 00:11:29.997 | 1.00th=[ 9372], 5.00th=[10552], 10.00th=[11994], 20.00th=[12125], 00:11:29.997 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:11:29.997 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13304], 95.00th=[13960], 00:11:29.997 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16712], 99.95th=[18220], 00:11:29.997 | 99.99th=[18482] 00:11:29.997 write: IOPS=5199, BW=20.3MiB/s (21.3MB/s)(20.4MiB/1004msec); 0 zone resets 00:11:29.997 slat (usec): min=10, max=3815, avg=92.27, stdev=436.55 00:11:29.997 clat (usec): min=295, max=16194, avg=12038.82, stdev=1536.42 00:11:29.997 lat (usec): min=3211, max=16293, avg=12131.08, stdev=1508.26 00:11:29.997 clat percentiles (usec): 00:11:29.997 | 1.00th=[ 7504], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[11600], 00:11:29.997 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:11:29.997 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13173], 95.00th=[13435], 00:11:29.997 | 99.00th=[14353], 99.50th=[15008], 99.90th=[15795], 99.95th=[15795], 00:11:29.997 | 99.99th=[16188] 00:11:29.997 bw ( KiB/s): min=20447, max=20480, per=26.22%, avg=20463.50, 
stdev=23.33, samples=2 00:11:29.997 iops : min= 5111, max= 5120, avg=5115.50, stdev= 6.36, samples=2 00:11:29.997 lat (usec) : 500=0.01% 00:11:29.997 lat (msec) : 4=0.28%, 10=9.46%, 20=90.25% 00:11:29.997 cpu : usr=5.08%, sys=13.86%, ctx=384, majf=0, minf=3 00:11:29.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:29.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:29.997 issued rwts: total=5120,5220,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:29.997 job2: (groupid=0, jobs=1): err= 0: pid=83320: Sat Dec 14 18:31:45 2024 00:11:29.997 read: IOPS=4377, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1005msec) 00:11:29.997 slat (usec): min=4, max=5136, avg=111.59, stdev=518.35 00:11:29.997 clat (usec): min=2095, max=30577, avg=14477.43, stdev=2632.92 00:11:29.997 lat (usec): min=4956, max=30588, avg=14589.02, stdev=2601.94 00:11:29.997 clat percentiles (usec): 00:11:29.997 | 1.00th=[10683], 5.00th=[11600], 10.00th=[12780], 20.00th=[13698], 00:11:29.997 | 30.00th=[13829], 40.00th=[13960], 50.00th=[14091], 60.00th=[14353], 00:11:29.997 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15139], 95.00th=[16712], 00:11:29.997 | 99.00th=[28181], 99.50th=[28967], 99.90th=[30278], 99.95th=[30540], 00:11:29.997 | 99.99th=[30540] 00:11:29.997 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:11:29.997 slat (usec): min=10, max=3435, avg=102.81, stdev=442.67 00:11:29.997 clat (usec): min=10932, max=16819, avg=13639.86, stdev=1407.61 00:11:29.997 lat (usec): min=10952, max=16840, avg=13742.67, stdev=1399.95 00:11:29.997 clat percentiles (usec): 00:11:29.997 | 1.00th=[11207], 5.00th=[11600], 10.00th=[11731], 20.00th=[12125], 00:11:29.997 | 30.00th=[12256], 40.00th=[12911], 50.00th=[14222], 60.00th=[14484], 00:11:29.997 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15270], 95.00th=[15533], 00:11:29.997 | 99.00th=[16057], 99.50th=[16450], 99.90th=[16909], 99.95th=[16909], 00:11:29.997 | 99.99th=[16909] 00:11:29.997 bw ( KiB/s): min=17652, max=19176, per=23.60%, avg=18414.00, stdev=1077.63, samples=2 00:11:29.997 iops : min= 4413, max= 4794, avg=4603.50, stdev=269.41, samples=2 00:11:29.997 lat (msec) : 4=0.01%, 10=0.36%, 20=97.62%, 50=2.01% 00:11:29.997 cpu : usr=5.18%, sys=11.85%, ctx=496, majf=0, minf=3 00:11:29.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:29.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:29.997 issued rwts: total=4399,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:29.997 job3: (groupid=0, jobs=1): err= 0: pid=83321: Sat Dec 14 18:31:45 2024 00:11:29.997 read: IOPS=4176, BW=16.3MiB/s (17.1MB/s)(16.3MiB/1002msec) 00:11:29.997 slat (usec): min=4, max=6999, avg=112.71, stdev=528.01 00:11:29.997 clat (usec): min=1198, max=20645, avg=14424.19, stdev=1479.64 00:11:29.997 lat (usec): min=3812, max=20657, avg=14536.90, stdev=1401.98 00:11:29.997 clat percentiles (usec): 00:11:29.997 | 1.00th=[10028], 5.00th=[11863], 10.00th=[13173], 20.00th=[14091], 00:11:29.997 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14484], 60.00th=[14615], 00:11:29.997 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15401], 95.00th=[16319], 00:11:29.997 | 99.00th=[18220], 
99.50th=[18744], 99.90th=[20579], 99.95th=[20579], 00:11:29.997 | 99.99th=[20579] 00:11:29.997 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:11:29.997 slat (usec): min=8, max=3521, avg=107.24, stdev=431.34 00:11:29.997 clat (usec): min=10752, max=27786, avg=14330.93, stdev=2145.59 00:11:29.997 lat (usec): min=10912, max=27805, avg=14438.16, stdev=2145.30 00:11:29.997 clat percentiles (usec): 00:11:29.997 | 1.00th=[11338], 5.00th=[11863], 10.00th=[12125], 20.00th=[12518], 00:11:29.997 | 30.00th=[12911], 40.00th=[13698], 50.00th=[14353], 60.00th=[14746], 00:11:29.997 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16450], 95.00th=[17171], 00:11:29.997 | 99.00th=[23725], 99.50th=[25035], 99.90th=[26870], 99.95th=[27657], 00:11:29.997 | 99.99th=[27657] 00:11:29.997 bw ( KiB/s): min=18192, max=18331, per=23.40%, avg=18261.50, stdev=98.29, samples=2 00:11:29.997 iops : min= 4548, max= 4582, avg=4565.00, stdev=24.04, samples=2 00:11:29.997 lat (msec) : 2=0.01%, 4=0.10%, 10=0.38%, 20=98.24%, 50=1.27% 00:11:29.997 cpu : usr=3.60%, sys=13.69%, ctx=493, majf=0, minf=4 00:11:29.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:29.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:29.997 issued rwts: total=4185,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:29.997 00:11:29.997 Run status group 0 (all jobs): 00:11:29.997 READ: bw=73.2MiB/s (76.7MB/s), 16.3MiB/s-19.9MiB/s (17.1MB/s-20.9MB/s), io=73.5MiB (77.1MB), run=1002-1005msec 00:11:29.997 WRITE: bw=76.2MiB/s (79.9MB/s), 17.9MiB/s-20.3MiB/s (18.8MB/s-21.3MB/s), io=76.6MiB (80.3MB), run=1002-1005msec 00:11:29.997 00:11:29.997 Disk stats (read/write): 00:11:29.997 nvme0n1: ios=4330/4608, merge=0/0, ticks=16031/15693, in_queue=31724, util=88.89% 00:11:29.997 nvme0n2: ios=4336/4608, merge=0/0, ticks=15899/15808, in_queue=31707, util=89.39% 00:11:29.997 nvme0n3: ios=3840/4096, merge=0/0, ticks=12273/12158, in_queue=24431, util=89.19% 00:11:29.997 nvme0n4: ios=3630/4096, merge=0/0, ticks=12098/12585, in_queue=24683, util=89.74% 00:11:29.997 18:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:29.997 [global] 00:11:29.997 thread=1 00:11:29.997 invalidate=1 00:11:29.997 rw=randwrite 00:11:29.997 time_based=1 00:11:29.997 runtime=1 00:11:29.997 ioengine=libaio 00:11:29.997 direct=1 00:11:29.997 bs=4096 00:11:29.997 iodepth=128 00:11:29.997 norandommap=0 00:11:29.997 numjobs=1 00:11:29.997 00:11:29.997 verify_dump=1 00:11:29.997 verify_backlog=512 00:11:29.997 verify_state_save=0 00:11:29.997 do_verify=1 00:11:29.997 verify=crc32c-intel 00:11:29.997 [job0] 00:11:29.997 filename=/dev/nvme0n1 00:11:29.997 [job1] 00:11:29.997 filename=/dev/nvme0n2 00:11:29.997 [job2] 00:11:29.997 filename=/dev/nvme0n3 00:11:29.997 [job3] 00:11:29.997 filename=/dev/nvme0n4 00:11:29.997 Could not set queue depth (nvme0n1) 00:11:29.997 Could not set queue depth (nvme0n2) 00:11:29.997 Could not set queue depth (nvme0n3) 00:11:29.997 Could not set queue depth (nvme0n4) 00:11:29.997 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:29.997 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:11:29.997 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:29.997 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:29.997 fio-3.35 00:11:29.997 Starting 4 threads 00:11:31.376 00:11:31.376 job0: (groupid=0, jobs=1): err= 0: pid=83374: Sat Dec 14 18:31:46 2024 00:11:31.376 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:11:31.376 slat (usec): min=3, max=11473, avg=112.76, stdev=676.72 00:11:31.376 clat (usec): min=4732, max=33041, avg=14396.77, stdev=4936.49 00:11:31.376 lat (usec): min=4744, max=40044, avg=14509.54, stdev=4998.57 00:11:31.376 clat percentiles (usec): 00:11:31.376 | 1.00th=[ 7570], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[10028], 00:11:31.376 | 30.00th=[10683], 40.00th=[11600], 50.00th=[12649], 60.00th=[15401], 00:11:31.376 | 70.00th=[17433], 80.00th=[19006], 90.00th=[21627], 95.00th=[22938], 00:11:31.376 | 99.00th=[28443], 99.50th=[29492], 99.90th=[31589], 99.95th=[31851], 00:11:31.376 | 99.99th=[33162] 00:11:31.376 write: IOPS=4782, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1008msec); 0 zone resets 00:11:31.376 slat (usec): min=4, max=11391, avg=93.36, stdev=497.42 00:11:31.376 clat (usec): min=2262, max=29459, avg=12732.80, stdev=4858.24 00:11:31.376 lat (usec): min=3731, max=29497, avg=12826.16, stdev=4895.75 00:11:31.376 clat percentiles (usec): 00:11:31.376 | 1.00th=[ 4555], 5.00th=[ 5669], 10.00th=[ 7570], 20.00th=[ 9110], 00:11:31.376 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:11:31.376 | 70.00th=[14353], 80.00th=[18220], 90.00th=[20317], 95.00th=[21890], 00:11:31.376 | 99.00th=[23200], 99.50th=[23987], 99.90th=[26608], 99.95th=[26608], 00:11:31.376 | 99.99th=[29492] 00:11:31.376 bw ( KiB/s): min=12968, max=24576, per=30.30%, avg=18772.00, stdev=8208.10, samples=2 00:11:31.376 iops : min= 3242, max= 6144, avg=4693.00, stdev=2052.02, samples=2 00:11:31.376 lat (msec) : 4=0.07%, 10=23.50%, 20=64.40%, 50=12.03% 00:11:31.376 cpu : usr=4.47%, sys=11.72%, ctx=851, majf=0, minf=9 00:11:31.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:31.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.376 issued rwts: total=4608,4821,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.376 job1: (groupid=0, jobs=1): err= 0: pid=83375: Sat Dec 14 18:31:46 2024 00:11:31.376 read: IOPS=3011, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1004msec) 00:11:31.376 slat (usec): min=3, max=8884, avg=164.96, stdev=796.09 00:11:31.376 clat (usec): min=345, max=35561, avg=20460.74, stdev=3928.90 00:11:31.376 lat (usec): min=6055, max=43993, avg=20625.70, stdev=3993.87 00:11:31.376 clat percentiles (usec): 00:11:31.376 | 1.00th=[ 6587], 5.00th=[15533], 10.00th=[16188], 20.00th=[17957], 00:11:31.376 | 30.00th=[18482], 40.00th=[19268], 50.00th=[20579], 60.00th=[21627], 00:11:31.376 | 70.00th=[22152], 80.00th=[22938], 90.00th=[25822], 95.00th=[26346], 00:11:31.376 | 99.00th=[29492], 99.50th=[32900], 99.90th=[35390], 99.95th=[35390], 00:11:31.376 | 99.99th=[35390] 00:11:31.376 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:11:31.376 slat (usec): min=4, max=10216, avg=156.91, stdev=761.15 00:11:31.376 clat (usec): min=10295, max=32252, avg=20930.47, stdev=2802.15 00:11:31.376 lat (usec): min=10315, 
max=33989, avg=21087.38, stdev=2890.94 00:11:31.376 clat percentiles (usec): 00:11:31.376 | 1.00th=[14222], 5.00th=[16450], 10.00th=[17695], 20.00th=[18744], 00:11:31.376 | 30.00th=[20055], 40.00th=[20579], 50.00th=[20841], 60.00th=[21103], 00:11:31.376 | 70.00th=[21890], 80.00th=[23462], 90.00th=[24511], 95.00th=[25297], 00:11:31.376 | 99.00th=[27657], 99.50th=[28705], 99.90th=[31065], 99.95th=[32113], 00:11:31.376 | 99.99th=[32375] 00:11:31.376 bw ( KiB/s): min=12288, max=12288, per=19.84%, avg=12288.00, stdev= 0.00, samples=2 00:11:31.376 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:11:31.376 lat (usec) : 500=0.02% 00:11:31.376 lat (msec) : 10=1.03%, 20=37.39%, 50=61.56% 00:11:31.376 cpu : usr=3.39%, sys=7.38%, ctx=845, majf=0, minf=17 00:11:31.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:31.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.376 issued rwts: total=3024,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.376 job2: (groupid=0, jobs=1): err= 0: pid=83376: Sat Dec 14 18:31:46 2024 00:11:31.376 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:11:31.376 slat (usec): min=3, max=10564, avg=169.54, stdev=833.54 00:11:31.376 clat (usec): min=8319, max=33873, avg=20808.74, stdev=3536.56 00:11:31.376 lat (usec): min=8330, max=33892, avg=20978.29, stdev=3612.62 00:11:31.376 clat percentiles (usec): 00:11:31.376 | 1.00th=[12780], 5.00th=[15270], 10.00th=[16712], 20.00th=[17957], 00:11:31.376 | 30.00th=[18744], 40.00th=[19792], 50.00th=[21365], 60.00th=[21890], 00:11:31.376 | 70.00th=[22414], 80.00th=[23200], 90.00th=[25297], 95.00th=[26608], 00:11:31.376 | 99.00th=[30540], 99.50th=[31851], 99.90th=[33162], 99.95th=[33162], 00:11:31.376 | 99.99th=[33817] 00:11:31.376 write: IOPS=3084, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1008msec); 0 zone resets 00:11:31.376 slat (usec): min=5, max=10585, avg=146.68, stdev=711.13 00:11:31.376 clat (usec): min=6931, max=33862, avg=20448.71, stdev=3437.88 00:11:31.376 lat (usec): min=6954, max=33869, avg=20595.39, stdev=3518.37 00:11:31.376 clat percentiles (usec): 00:11:31.376 | 1.00th=[ 8160], 5.00th=[13566], 10.00th=[16188], 20.00th=[17695], 00:11:31.376 | 30.00th=[19530], 40.00th=[20317], 50.00th=[20841], 60.00th=[21103], 00:11:31.376 | 70.00th=[21890], 80.00th=[23200], 90.00th=[24511], 95.00th=[25035], 00:11:31.376 | 99.00th=[27657], 99.50th=[28443], 99.90th=[30540], 99.95th=[33162], 00:11:31.376 | 99.99th=[33817] 00:11:31.376 bw ( KiB/s): min=12288, max=12312, per=19.86%, avg=12300.00, stdev=16.97, samples=2 00:11:31.376 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:11:31.376 lat (msec) : 10=0.79%, 20=37.42%, 50=61.79% 00:11:31.376 cpu : usr=3.28%, sys=7.65%, ctx=874, majf=0, minf=13 00:11:31.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:31.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.376 issued rwts: total=3072,3109,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.376 job3: (groupid=0, jobs=1): err= 0: pid=83377: Sat Dec 14 18:31:46 2024 00:11:31.376 read: IOPS=4121, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1008msec) 00:11:31.376 slat (usec): min=2, 
max=10762, avg=120.78, stdev=720.50 00:11:31.376 clat (usec): min=2530, max=29585, avg=15530.53, stdev=4830.74 00:11:31.376 lat (usec): min=4658, max=30181, avg=15651.31, stdev=4886.11 00:11:31.376 clat percentiles (usec): 00:11:31.376 | 1.00th=[ 8160], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11207], 00:11:31.376 | 30.00th=[11994], 40.00th=[12256], 50.00th=[14222], 60.00th=[16057], 00:11:31.376 | 70.00th=[18744], 80.00th=[20317], 90.00th=[22414], 95.00th=[23987], 00:11:31.376 | 99.00th=[27919], 99.50th=[28181], 99.90th=[28705], 99.95th=[28967], 00:11:31.376 | 99.99th=[29492] 00:11:31.376 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:11:31.376 slat (usec): min=4, max=9787, avg=101.69, stdev=565.28 00:11:31.376 clat (usec): min=3618, max=28677, avg=13707.70, stdev=3936.78 00:11:31.376 lat (usec): min=3646, max=28709, avg=13809.38, stdev=3986.99 00:11:31.376 clat percentiles (usec): 00:11:31.376 | 1.00th=[ 5211], 5.00th=[ 7767], 10.00th=[ 9896], 20.00th=[11338], 00:11:31.376 | 30.00th=[11731], 40.00th=[12387], 50.00th=[12649], 60.00th=[12911], 00:11:31.376 | 70.00th=[15401], 80.00th=[16909], 90.00th=[19530], 95.00th=[21365], 00:11:31.376 | 99.00th=[24773], 99.50th=[25822], 99.90th=[27395], 99.95th=[27395], 00:11:31.376 | 99.99th=[28705] 00:11:31.376 bw ( KiB/s): min=14816, max=21531, per=29.34%, avg=18173.50, stdev=4748.22, samples=2 00:11:31.376 iops : min= 3704, max= 5382, avg=4543.00, stdev=1186.53, samples=2 00:11:31.376 lat (msec) : 4=0.05%, 10=8.08%, 20=76.61%, 50=15.26% 00:11:31.376 cpu : usr=4.07%, sys=11.52%, ctx=757, majf=0, minf=11 00:11:31.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:31.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.376 issued rwts: total=4154,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.376 00:11:31.376 Run status group 0 (all jobs): 00:11:31.376 READ: bw=57.6MiB/s (60.4MB/s), 11.8MiB/s-17.9MiB/s (12.3MB/s-18.7MB/s), io=58.0MiB (60.9MB), run=1004-1008msec 00:11:31.376 WRITE: bw=60.5MiB/s (63.4MB/s), 12.0MiB/s-18.7MiB/s (12.5MB/s-19.6MB/s), io=61.0MiB (63.9MB), run=1004-1008msec 00:11:31.376 00:11:31.376 Disk stats (read/write): 00:11:31.376 nvme0n1: ios=4146/4396, merge=0/0, ticks=42652/40790, in_queue=83442, util=89.38% 00:11:31.376 nvme0n2: ios=2527/2560, merge=0/0, ticks=24883/25665, in_queue=50548, util=87.26% 00:11:31.376 nvme0n3: ios=2589/2591, merge=0/0, ticks=25904/25207, in_queue=51111, util=90.19% 00:11:31.376 nvme0n4: ios=3672/4096, merge=0/0, ticks=41642/41630, in_queue=83272, util=89.60% 00:11:31.376 18:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:31.376 18:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=83391 00:11:31.376 18:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:31.376 18:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:31.376 [global] 00:11:31.376 thread=1 00:11:31.376 invalidate=1 00:11:31.376 rw=read 00:11:31.376 time_based=1 00:11:31.376 runtime=10 00:11:31.376 ioengine=libaio 00:11:31.376 direct=1 00:11:31.376 bs=4096 00:11:31.376 iodepth=1 00:11:31.376 norandommap=1 00:11:31.376 numjobs=1 00:11:31.376 00:11:31.376 [job0] 00:11:31.376 
filename=/dev/nvme0n1 00:11:31.376 [job1] 00:11:31.376 filename=/dev/nvme0n2 00:11:31.376 [job2] 00:11:31.376 filename=/dev/nvme0n3 00:11:31.376 [job3] 00:11:31.376 filename=/dev/nvme0n4 00:11:31.376 Could not set queue depth (nvme0n1) 00:11:31.376 Could not set queue depth (nvme0n2) 00:11:31.376 Could not set queue depth (nvme0n3) 00:11:31.376 Could not set queue depth (nvme0n4) 00:11:31.377 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.377 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.377 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.377 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.377 fio-3.35 00:11:31.377 Starting 4 threads 00:11:34.687 18:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:34.687 fio: pid=83434, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:34.687 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=62357504, buflen=4096 00:11:34.687 18:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:34.945 fio: pid=83433, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:34.945 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=69632000, buflen=4096 00:11:34.945 18:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:34.945 18:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:35.204 fio: pid=83431, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:35.204 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=51929088, buflen=4096 00:11:35.204 18:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:35.204 18:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:35.464 fio: pid=83432, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:35.464 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=57356288, buflen=4096 00:11:35.464 00:11:35.464 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=83431: Sat Dec 14 18:31:51 2024 00:11:35.464 read: IOPS=3667, BW=14.3MiB/s (15.0MB/s)(49.5MiB/3457msec) 00:11:35.464 slat (usec): min=10, max=10797, avg=17.71, stdev=159.42 00:11:35.464 clat (usec): min=115, max=2941, avg=253.20, stdev=68.48 00:11:35.464 lat (usec): min=128, max=11010, avg=270.91, stdev=172.58 00:11:35.464 clat percentiles (usec): 00:11:35.464 | 1.00th=[ 131], 5.00th=[ 139], 10.00th=[ 147], 20.00th=[ 247], 00:11:35.464 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:11:35.464 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 302], 00:11:35.464 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 676], 99.95th=[ 1029], 00:11:35.464 | 99.99th=[ 2507] 00:11:35.464 bw ( 
KiB/s): min=13752, max=14296, per=22.33%, avg=14001.33, stdev=249.74, samples=6 00:11:35.464 iops : min= 3438, max= 3574, avg=3500.33, stdev=62.44, samples=6 00:11:35.464 lat (usec) : 250=21.66%, 500=78.20%, 750=0.06%, 1000=0.02% 00:11:35.464 lat (msec) : 2=0.03%, 4=0.03% 00:11:35.464 cpu : usr=1.10%, sys=4.51%, ctx=12699, majf=0, minf=1 00:11:35.464 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:35.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.464 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.464 issued rwts: total=12679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:35.464 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:35.464 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=83432: Sat Dec 14 18:31:51 2024 00:11:35.464 read: IOPS=3726, BW=14.6MiB/s (15.3MB/s)(54.7MiB/3758msec) 00:11:35.464 slat (usec): min=11, max=15337, avg=19.34, stdev=196.11 00:11:35.464 clat (usec): min=117, max=7654, avg=247.67, stdev=173.58 00:11:35.464 lat (usec): min=130, max=15602, avg=267.01, stdev=261.96 00:11:35.464 clat percentiles (usec): 00:11:35.464 | 1.00th=[ 126], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 151], 00:11:35.464 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:11:35.464 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 302], 00:11:35.464 | 99.00th=[ 330], 99.50th=[ 392], 99.90th=[ 3654], 99.95th=[ 4555], 00:11:35.464 | 99.99th=[ 6259] 00:11:35.465 bw ( KiB/s): min=12816, max=20412, per=22.84%, avg=14318.29, stdev=2715.04, samples=7 00:11:35.465 iops : min= 3204, max= 5103, avg=3579.57, stdev=678.76, samples=7 00:11:35.465 lat (usec) : 250=29.68%, 500=70.00%, 750=0.08%, 1000=0.03% 00:11:35.465 lat (msec) : 2=0.04%, 4=0.11%, 10=0.06% 00:11:35.465 cpu : usr=0.80%, sys=4.66%, ctx=14012, majf=0, minf=1 00:11:35.465 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:35.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.465 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.465 issued rwts: total=14004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:35.465 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:35.465 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=83433: Sat Dec 14 18:31:51 2024 00:11:35.465 read: IOPS=5291, BW=20.7MiB/s (21.7MB/s)(66.4MiB/3213msec) 00:11:35.465 slat (usec): min=8, max=7827, avg=15.40, stdev=84.85 00:11:35.465 clat (usec): min=3, max=84511, avg=172.18, stdev=648.81 00:11:35.465 lat (usec): min=153, max=84526, avg=187.59, stdev=654.35 00:11:35.465 clat percentiles (usec): 00:11:35.465 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:11:35.465 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:11:35.465 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 188], 00:11:35.465 | 99.00th=[ 223], 99.50th=[ 253], 99.90th=[ 400], 99.95th=[ 742], 00:11:35.465 | 99.99th=[ 4178] 00:11:35.465 bw ( KiB/s): min=17440, max=22344, per=34.04%, avg=21345.33, stdev=1933.69, samples=6 00:11:35.465 iops : min= 4360, max= 5586, avg=5336.33, stdev=483.42, samples=6 00:11:35.465 lat (usec) : 4=0.01%, 10=0.01%, 100=0.01%, 250=99.44%, 500=0.46% 00:11:35.465 lat (usec) : 750=0.03%, 1000=0.01% 00:11:35.465 lat (msec) : 2=0.01%, 4=0.02%, 10=0.01%, 100=0.01% 00:11:35.465 cpu : usr=1.34%, sys=6.35%, 
ctx=17013, majf=0, minf=2 00:11:35.465 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:35.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.465 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.465 issued rwts: total=17001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:35.465 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:35.465 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=83434: Sat Dec 14 18:31:51 2024 00:11:35.465 read: IOPS=5157, BW=20.1MiB/s (21.1MB/s)(59.5MiB/2952msec) 00:11:35.465 slat (nsec): min=12074, max=76130, avg=15592.42, stdev=4831.47 00:11:35.465 clat (usec): min=145, max=3812, avg=176.92, stdev=39.09 00:11:35.465 lat (usec): min=160, max=3827, avg=192.51, stdev=39.35 00:11:35.465 clat percentiles (usec): 00:11:35.465 | 1.00th=[ 155], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 163], 00:11:35.465 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:11:35.465 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 206], 00:11:35.465 | 99.00th=[ 243], 99.50th=[ 258], 99.90th=[ 302], 99.95th=[ 515], 00:11:35.465 | 99.99th=[ 1762] 00:11:35.465 bw ( KiB/s): min=20472, max=20944, per=33.04%, avg=20713.60, stdev=202.42, samples=5 00:11:35.465 iops : min= 5118, max= 5236, avg=5178.40, stdev=50.60, samples=5 00:11:35.465 lat (usec) : 250=99.22%, 500=0.72%, 750=0.03%, 1000=0.01% 00:11:35.465 lat (msec) : 2=0.01%, 4=0.01% 00:11:35.465 cpu : usr=1.25%, sys=6.44%, ctx=15231, majf=0, minf=2 00:11:35.465 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:35.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.465 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.465 issued rwts: total=15225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:35.465 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:35.465 00:11:35.465 Run status group 0 (all jobs): 00:11:35.465 READ: bw=61.2MiB/s (64.2MB/s), 14.3MiB/s-20.7MiB/s (15.0MB/s-21.7MB/s), io=230MiB (241MB), run=2952-3758msec 00:11:35.465 00:11:35.465 Disk stats (read/write): 00:11:35.465 nvme0n1: ios=12122/0, merge=0/0, ticks=3174/0, in_queue=3174, util=95.36% 00:11:35.465 nvme0n2: ios=13094/0, merge=0/0, ticks=3398/0, in_queue=3398, util=94.99% 00:11:35.465 nvme0n3: ios=16586/0, merge=0/0, ticks=2904/0, in_queue=2904, util=96.40% 00:11:35.465 nvme0n4: ios=14795/0, merge=0/0, ticks=2672/0, in_queue=2672, util=96.79% 00:11:35.465 18:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:35.465 18:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:35.724 18:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:35.724 18:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:35.982 18:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:35.982 18:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 
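The bdev deletions traced above and below run while the read jobs from fio_pid=83391 are still in flight; that is why each job in the summary reports err=95 (Operation not supported) and why the script later prints 'nvmf hotplug test: fio failed as expected'. A compressed sketch of the hotplug delete sequence, assuming the default /var/tmp/spdk.sock RPC socket (the $rpc shorthand and the loop form are illustrative, not the literal fio.sh code):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_raid_delete concat0        # each delete triggers an io_u error on the affected namespace
    $rpc bdev_raid_delete raid0
    for b in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $rpc bdev_malloc_delete "$b"     # plain malloc bdevs, removed mid-I/O
    done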
00:11:36.241 18:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:36.241 18:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:36.808 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:36.808 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:36.808 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:36.808 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 83391 00:11:36.808 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:36.808 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:37.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.067 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:37.067 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:37.067 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:37.067 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.067 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:37.067 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.067 nvmf hotplug test: fio failed as expected 00:11:37.067 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:37.067 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:37.067 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:37.067 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:37.326 18:31:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:37.326 rmmod nvme_tcp 00:11:37.326 rmmod nvme_fabrics 00:11:37.326 rmmod nvme_keyring 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 82901 ']' 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 82901 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 82901 ']' 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 82901 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:37.326 18:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82901 00:11:37.326 killing process with pid 82901 00:11:37.326 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:37.326 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:37.326 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82901' 00:11:37.326 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 82901 00:11:37.326 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 82901 00:11:37.585 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:37.585 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:37.585 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:37.585 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:37.585 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:11:37.585 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:37.585 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:11:37.585 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:37.585 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:37.585 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:37.585 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:37.585 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:37.844 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # 
ip link set nvmf_tgt_br2 nomaster 00:11:37.844 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:37.844 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:37.844 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:37.844 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:37.844 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:37.844 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:37.844 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:37.844 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:37.844 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:37.844 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:37.844 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.845 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.845 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.845 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:11:37.845 00:11:37.845 real 0m20.242s 00:11:37.845 user 1m15.689s 00:11:37.845 sys 0m9.417s 00:11:37.845 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:37.845 ************************************ 00:11:37.845 END TEST nvmf_fio_target 00:11:37.845 ************************************ 00:11:37.845 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.104 18:31:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:38.104 18:31:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:38.104 18:31:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.104 18:31:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:38.104 ************************************ 00:11:38.104 START TEST nvmf_bdevio 00:11:38.104 ************************************ 00:11:38.104 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:38.104 * Looking for test storage... 
00:11:38.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:38.104 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:38.104 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:11:38.104 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:38.104 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:38.104 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:38.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.105 --rc genhtml_branch_coverage=1 00:11:38.105 --rc genhtml_function_coverage=1 00:11:38.105 --rc genhtml_legend=1 00:11:38.105 --rc geninfo_all_blocks=1 00:11:38.105 --rc geninfo_unexecuted_blocks=1 00:11:38.105 00:11:38.105 ' 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:38.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.105 --rc genhtml_branch_coverage=1 00:11:38.105 --rc genhtml_function_coverage=1 00:11:38.105 --rc genhtml_legend=1 00:11:38.105 --rc geninfo_all_blocks=1 00:11:38.105 --rc geninfo_unexecuted_blocks=1 00:11:38.105 00:11:38.105 ' 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:38.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.105 --rc genhtml_branch_coverage=1 00:11:38.105 --rc genhtml_function_coverage=1 00:11:38.105 --rc genhtml_legend=1 00:11:38.105 --rc geninfo_all_blocks=1 00:11:38.105 --rc geninfo_unexecuted_blocks=1 00:11:38.105 00:11:38.105 ' 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:38.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.105 --rc genhtml_branch_coverage=1 00:11:38.105 --rc genhtml_function_coverage=1 00:11:38.105 --rc genhtml_legend=1 00:11:38.105 --rc geninfo_all_blocks=1 00:11:38.105 --rc geninfo_unexecuted_blocks=1 00:11:38.105 00:11:38.105 ' 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:38.105 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
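nvmftestinit below tears down and rebuilds the virtual test network; on a clean host the teardown half produces the 'Cannot find device' and 'Cannot open network namespace' messages seen shortly after, which are harmless. A condensed sketch of the topology the setup half creates (command, interface, and address names mirror the trace; the *_if2/*_br2 twin pair, the link-up steps, and the remaining iptables rules are elided for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, 10.0.0.3
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                          # reachability check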
00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:38.105 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:38.106 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:38.365 Cannot find device "nvmf_init_br" 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:38.365 Cannot find device "nvmf_init_br2" 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:38.365 Cannot find device "nvmf_tgt_br" 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:38.365 Cannot find device "nvmf_tgt_br2" 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:38.365 Cannot find device "nvmf_init_br" 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:38.365 Cannot find device "nvmf_init_br2" 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:38.365 Cannot find device "nvmf_tgt_br" 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:38.365 Cannot find device "nvmf_tgt_br2" 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:38.365 Cannot find device "nvmf_br" 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:38.365 Cannot find device "nvmf_init_if" 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:38.365 Cannot find device "nvmf_init_if2" 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:38.365 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:38.365 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:11:38.365 18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:38.365 
18:31:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:38.365 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:38.365 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:38.365 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:38.365 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:38.365 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:38.365 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:38.365 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:38.365 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:38.365 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:38.365 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:38.365 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:38.365 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:38.365 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:38.365 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:38.365 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:38.365 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:38.365 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:38.365 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:38.624 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:38.624 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:11:38.624 00:11:38.624 --- 10.0.0.3 ping statistics --- 00:11:38.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.624 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:38.624 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:38.624 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:11:38.624 00:11:38.624 --- 10.0.0.4 ping statistics --- 00:11:38.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.624 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:38.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:38.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:11:38.624 00:11:38.624 --- 10.0.0.1 ping statistics --- 00:11:38.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.624 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:38.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:38.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:11:38.624 00:11:38.624 --- 10.0.0.2 ping statistics --- 00:11:38.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.624 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # return 0 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:38.624 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.625 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:38.625 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:38.625 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.625 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:38.625 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:38.625 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:38.625 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:38.625 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:38.625 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.625 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=83828 00:11:38.625 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:38.625 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 83828 00:11:38.625 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 83828 ']' 00:11:38.625 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.625 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:38.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.625 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.625 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:38.625 18:31:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.625 [2024-12-14 18:31:54.312789] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
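For reference, the -m 0x78 mask handed to nvmf_tgt here selects CPU cores 3 through 6, which is why exactly four reactor threads appear in the startup notices that follow. A small, generic bash snippet for decoding such a core mask (illustrative only, not part of the test scripts):

    mask=0x78
    printf 'cores:'
    for i in {0..31}; do
        (( (mask >> i) & 1 )) && printf ' %d' "$i"   # print each set bit
    done
    echo    # -> cores: 3 4 5 6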
00:11:38.625 [2024-12-14 18:31:54.312911] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.883 [2024-12-14 18:31:54.457341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.884 [2024-12-14 18:31:54.561655] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.884 [2024-12-14 18:31:54.561727] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.884 [2024-12-14 18:31:54.561791] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.884 [2024-12-14 18:31:54.561804] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.884 [2024-12-14 18:31:54.561814] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:38.884 [2024-12-14 18:31:54.562005] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:11:38.884 [2024-12-14 18:31:54.562544] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:11:38.884 [2024-12-14 18:31:54.562713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:11:38.884 [2024-12-14 18:31:54.562715] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.819 [2024-12-14 18:31:55.353434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.819 Malloc0 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.819 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.820 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:39.820 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.820 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.820 [2024-12-14 18:31:55.422031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:39.820 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.820 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:39.820 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:39.820 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:11:39.820 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:11:39.820 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:39.820 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:39.820 { 00:11:39.820 "params": { 00:11:39.820 "name": "Nvme$subsystem", 00:11:39.820 "trtype": "$TEST_TRANSPORT", 00:11:39.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:39.820 "adrfam": "ipv4", 00:11:39.820 "trsvcid": "$NVMF_PORT", 00:11:39.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:39.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:39.820 "hdgst": ${hdgst:-false}, 00:11:39.820 "ddgst": ${ddgst:-false} 00:11:39.820 }, 00:11:39.820 "method": "bdev_nvme_attach_controller" 00:11:39.820 } 00:11:39.820 EOF 00:11:39.820 )") 00:11:39.820 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:11:39.820 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:11:39.820 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:11:39.820 18:31:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:39.820 "params": { 00:11:39.820 "name": "Nvme1", 00:11:39.820 "trtype": "tcp", 00:11:39.820 "traddr": "10.0.0.3", 00:11:39.820 "adrfam": "ipv4", 00:11:39.820 "trsvcid": "4420", 00:11:39.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.820 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:39.820 "hdgst": false, 00:11:39.820 "ddgst": false 00:11:39.820 }, 00:11:39.820 "method": "bdev_nvme_attach_controller" 00:11:39.820 }' 00:11:39.820 [2024-12-14 18:31:55.481277] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
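Stripped of the rpc_cmd/xtrace wrapping, the target configuration above reduces to five RPCs: create the TCP transport, back it with a 64 MiB malloc bdev, and expose that bdev as a namespace of cnode1 listening on 10.0.0.3:4420. The same bring-up as plain rpc.py calls (a sketch; the test routes these through its rpc_cmd helper):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The bdevio initiator then attaches to that listener using the JSON printed above, piped in over /dev/fd/62 so no config file touches disk.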
00:11:39.820 [2024-12-14 18:31:55.481938] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83882 ] 00:11:40.078 [2024-12-14 18:31:55.619975] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:40.078 [2024-12-14 18:31:55.715670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.078 [2024-12-14 18:31:55.715800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.078 [2024-12-14 18:31:55.716136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.337 I/O targets: 00:11:40.337 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:40.337 00:11:40.337 00:11:40.337 CUnit - A unit testing framework for C - Version 2.1-3 00:11:40.337 http://cunit.sourceforge.net/ 00:11:40.337 00:11:40.337 00:11:40.337 Suite: bdevio tests on: Nvme1n1 00:11:40.337 Test: blockdev write read block ...passed 00:11:40.337 Test: blockdev write zeroes read block ...passed 00:11:40.337 Test: blockdev write zeroes read no split ...passed 00:11:40.337 Test: blockdev write zeroes read split ...passed 00:11:40.337 Test: blockdev write zeroes read split partial ...passed 00:11:40.337 Test: blockdev reset ...[2024-12-14 18:31:56.059897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:40.337 [2024-12-14 18:31:56.060164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cb320 (9): Bad file descriptor 00:11:40.337 [2024-12-14 18:31:56.078640] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:40.337 passed 00:11:40.337 Test: blockdev write read 8 blocks ...passed 00:11:40.337 Test: blockdev write read size > 128k ...passed 00:11:40.337 Test: blockdev write read invalid size ...passed 00:11:40.596 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.596 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.596 Test: blockdev write read max offset ...passed 00:11:40.596 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.596 Test: blockdev writev readv 8 blocks ...passed 00:11:40.596 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.596 Test: blockdev writev readv block ...passed 00:11:40.596 Test: blockdev writev readv size > 128k ...passed 00:11:40.596 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.596 Test: blockdev comparev and writev ...[2024-12-14 18:31:56.254349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:40.596 [2024-12-14 18:31:56.254416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:40.596 [2024-12-14 18:31:56.254437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:40.596 [2024-12-14 18:31:56.254448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:40.596 [2024-12-14 18:31:56.254873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:40.596 [2024-12-14 18:31:56.254901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:40.596 [2024-12-14 18:31:56.254918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:40.596 [2024-12-14 18:31:56.254936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:40.596 [2024-12-14 18:31:56.255353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:40.596 [2024-12-14 18:31:56.255380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:40.596 [2024-12-14 18:31:56.255397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:40.596 [2024-12-14 18:31:56.255407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:40.596 [2024-12-14 18:31:56.255815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:40.596 [2024-12-14 18:31:56.255843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:40.596 [2024-12-14 18:31:56.255860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:40.596 [2024-12-14 18:31:56.255870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:40.596 passed 00:11:40.596 Test: blockdev nvme passthru rw ...passed 00:11:40.596 Test: blockdev nvme passthru vendor specific ...[2024-12-14 18:31:56.339002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:40.596 [2024-12-14 18:31:56.339036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:40.596 [2024-12-14 18:31:56.339190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:40.596 [2024-12-14 18:31:56.339207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:40.596 [2024-12-14 18:31:56.339333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:40.596 [2024-12-14 18:31:56.339350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:40.596 [2024-12-14 18:31:56.339472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:40.596 [2024-12-14 18:31:56.339488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:40.596 passed 00:11:40.855 Test: blockdev nvme admin passthru ...passed 00:11:40.855 Test: blockdev copy ...passed 00:11:40.855 00:11:40.855 Run Summary: Type Total Ran Passed Failed Inactive 00:11:40.855 suites 1 1 n/a 0 0 00:11:40.855 tests 23 23 23 0 0 00:11:40.855 asserts 152 152 152 0 n/a 00:11:40.855 00:11:40.855 Elapsed time = 0.903 seconds 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:41.114 rmmod nvme_tcp 00:11:41.114 rmmod nvme_fabrics 00:11:41.114 rmmod nvme_keyring 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
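With all 23 bdevio tests passed, teardown begins. The firewall cleanup leans on the SPDK_NVMF comment that was attached to every rule at setup time: the iptr helper invoked just below can drop exactly those rules and nothing else. Its core is a single pipeline, sketched here:

  # Re-apply the current ruleset minus any rule carrying the SPDK_NVMF tag.
  iptables-save | grep -v SPDK_NVMF | iptables-restore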
00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 83828 ']' 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 83828 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 83828 ']' 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 83828 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83828 00:11:41.114 killing process with pid 83828 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83828' 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 83828 00:11:41.114 18:31:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 83828 00:11:41.373 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:41.373 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:41.373 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:41.373 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:41.373 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:11:41.373 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:41.373 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:11:41.373 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:41.373 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:41.373 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:41.632 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:41.632 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:41.632 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:41.632 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:41.632 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:41.632 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:41.632 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:41.632 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:41.632 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:11:41.632 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:41.632 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:41.632 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:41.632 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:41.632 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.632 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.632 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.632 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:11:41.632 00:11:41.632 real 0m3.756s 00:11:41.632 user 0m12.481s 00:11:41.632 sys 0m1.120s 00:11:41.632 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.632 ************************************ 00:11:41.632 18:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.632 END TEST nvmf_bdevio 00:11:41.632 ************************************ 00:11:41.891 18:31:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:41.891 ************************************ 00:11:41.891 END TEST nvmf_target_core 00:11:41.891 ************************************ 00:11:41.891 00:11:41.891 real 3m38.446s 00:11:41.891 user 11m20.799s 00:11:41.891 sys 1m4.695s 00:11:41.891 18:31:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.891 18:31:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:41.891 18:31:57 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:41.891 18:31:57 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:41.891 18:31:57 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.891 18:31:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:41.891 ************************************ 00:11:41.891 START TEST nvmf_target_extra 00:11:41.891 ************************************ 00:11:41.891 18:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:41.891 * Looking for test storage... 
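The START TEST/END TEST banners framing each suite here come from autotest_common.sh's run_test wrapper, which labels and times the command it runs. A simplified sketch of its shape (the banners and return-code propagation follow the output above; the real helper also manages xtrace state, details assumed):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }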
00:11:41.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:41.891 18:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:41.891 18:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:11:41.891 18:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:42.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.152 --rc genhtml_branch_coverage=1 00:11:42.152 --rc genhtml_function_coverage=1 00:11:42.152 --rc genhtml_legend=1 00:11:42.152 --rc geninfo_all_blocks=1 00:11:42.152 --rc geninfo_unexecuted_blocks=1 00:11:42.152 00:11:42.152 ' 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:42.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.152 --rc genhtml_branch_coverage=1 00:11:42.152 --rc genhtml_function_coverage=1 00:11:42.152 --rc genhtml_legend=1 00:11:42.152 --rc geninfo_all_blocks=1 00:11:42.152 --rc geninfo_unexecuted_blocks=1 00:11:42.152 00:11:42.152 ' 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:42.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.152 --rc genhtml_branch_coverage=1 00:11:42.152 --rc genhtml_function_coverage=1 00:11:42.152 --rc genhtml_legend=1 00:11:42.152 --rc geninfo_all_blocks=1 00:11:42.152 --rc geninfo_unexecuted_blocks=1 00:11:42.152 00:11:42.152 ' 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:42.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.152 --rc genhtml_branch_coverage=1 00:11:42.152 --rc genhtml_function_coverage=1 00:11:42.152 --rc genhtml_legend=1 00:11:42.152 --rc geninfo_all_blocks=1 00:11:42.152 --rc geninfo_unexecuted_blocks=1 00:11:42.152 00:11:42.152 ' 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.152 18:31:57 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.152 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:42.152 ************************************ 00:11:42.152 START TEST nvmf_example 00:11:42.152 ************************************ 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:42.152 * Looking for test storage... 
00:11:42.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:42.152 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:42.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.153 --rc genhtml_branch_coverage=1 00:11:42.153 --rc genhtml_function_coverage=1 00:11:42.153 --rc genhtml_legend=1 00:11:42.153 --rc geninfo_all_blocks=1 00:11:42.153 --rc geninfo_unexecuted_blocks=1 00:11:42.153 00:11:42.153 ' 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:42.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.153 --rc genhtml_branch_coverage=1 00:11:42.153 --rc genhtml_function_coverage=1 00:11:42.153 --rc genhtml_legend=1 00:11:42.153 --rc geninfo_all_blocks=1 00:11:42.153 --rc geninfo_unexecuted_blocks=1 00:11:42.153 00:11:42.153 ' 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:42.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.153 --rc genhtml_branch_coverage=1 00:11:42.153 --rc genhtml_function_coverage=1 00:11:42.153 --rc genhtml_legend=1 00:11:42.153 --rc geninfo_all_blocks=1 00:11:42.153 --rc geninfo_unexecuted_blocks=1 00:11:42.153 00:11:42.153 ' 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:42.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.153 --rc genhtml_branch_coverage=1 00:11:42.153 --rc genhtml_function_coverage=1 00:11:42.153 --rc genhtml_legend=1 00:11:42.153 --rc geninfo_all_blocks=1 00:11:42.153 --rc geninfo_unexecuted_blocks=1 00:11:42.153 00:11:42.153 ' 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:42.153 18:31:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.153 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:42.412 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.412 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:42.412 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.412 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.413 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:42.413 18:31:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:42.413 Cannot find device "nvmf_init_br" 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:42.413 Cannot find device "nvmf_init_br2" 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:42.413 Cannot find device "nvmf_tgt_br" 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:42.413 Cannot find device "nvmf_tgt_br2" 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:42.413 Cannot find device "nvmf_init_br" 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:42.413 Cannot find device "nvmf_init_br2" 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:11:42.413 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:42.413 Cannot find device "nvmf_tgt_br" 00:11:42.413 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:11:42.413 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:42.413 Cannot find device "nvmf_tgt_br2" 00:11:42.413 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:11:42.413 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:42.413 Cannot find device "nvmf_br" 00:11:42.413 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:11:42.413 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:42.413 Cannot find 
device "nvmf_init_if" 00:11:42.413 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:11:42.413 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:42.413 Cannot find device "nvmf_init_if2" 00:11:42.413 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:11:42.413 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:42.413 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:42.413 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:11:42.413 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:42.413 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:42.413 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:11:42.413 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:42.413 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:42.413 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:42.413 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:42.413 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:42.413 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:42.672 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:42.672 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:42.672 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:42.673 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:42.673 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:11:42.673 00:11:42.673 --- 10.0.0.3 ping statistics --- 00:11:42.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.673 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:42.673 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:42.673 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:11:42.673 00:11:42.673 --- 10.0.0.4 ping statistics --- 00:11:42.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.673 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:42.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:42.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:11:42.673 00:11:42.673 --- 10.0.0.1 ping statistics --- 00:11:42.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.673 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:42.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:11:42.673 00:11:42.673 --- 10.0.0.2 ping statistics --- 00:11:42.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.673 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # return 0 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=84181 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 84181 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 84181 ']' 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:42.673 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:42.673 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:44.053 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:44.053 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:44.053 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:44.053 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:44.053 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:44.053 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:44.053 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.053 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:44.053 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.054 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:44.054 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.054 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:44.054 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.054 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:44.054 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:44.054 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.054 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:44.054 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.054 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:44.054 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:44.054 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.054 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:44.054 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.054 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:44.054 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.054 18:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:44.054 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.054 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:11:44.054 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:56.326 Initializing NVMe Controllers 00:11:56.326 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:11:56.326 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:56.326 Initialization complete. Launching workers. 00:11:56.326 ======================================================== 00:11:56.326 Latency(us) 00:11:56.326 Device Information : IOPS MiB/s Average min max 00:11:56.326 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16560.87 64.69 3864.18 659.62 20161.12 00:11:56.326 ======================================================== 00:11:56.326 Total : 16560.87 64.69 3864.18 659.62 20161.12 00:11:56.326 00:11:56.326 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:56.326 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:56.326 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:56.326 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:56.326 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:56.326 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:56.326 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:56.326 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:56.326 rmmod nvme_tcp 00:11:56.326 rmmod nvme_fabrics 00:11:56.326 rmmod nvme_keyring 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 84181 ']' 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 84181 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 84181 ']' 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 84181 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84181 00:11:56.326 killing process with pid 84181 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # 
process_name=nvmf 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84181' 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 84181 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 84181 00:11:56.326 nvmf threads initialize successfully 00:11:56.326 bdev subsystem init successfully 00:11:56.326 created a nvmf target service 00:11:56.326 create targets's poll groups done 00:11:56.326 all subsystems of target started 00:11:56.326 nvmf target is running 00:11:56.326 all subsystems of target stopped 00:11:56.326 destroy targets's poll groups done 00:11:56.326 destroyed the nvmf target service 00:11:56.326 bdev subsystem finish successfully 00:11:56.326 nvmf threads destroy successfully 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-save 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-restore 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:56.326 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:56.327 00:11:56.327 real 0m12.978s 00:11:56.327 user 0m45.225s 00:11:56.327 sys 0m2.225s 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:56.327 ************************************ 00:11:56.327 END TEST nvmf_example 00:11:56.327 ************************************ 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:56.327 ************************************ 00:11:56.327 START TEST nvmf_filesystem 00:11:56.327 ************************************ 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:56.327 * Looking for test storage... 
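For reference, the nvmf_example run that just ended above reduces to the following minimal sketch. The interface names, the 10.0.0.0/24 addressing, port 4420, the NQN, and the rpc_cmd arguments are taken verbatim from the trace; the rpc.py path and the condensed helper structure are assumptions, not the canonical nvmf/common.sh logic:

    # Recreate the veth/bridge topology the trace sets up (host side <-> nvmf_tgt_ns_spdk).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # The ipts wrapper seen above tags each rule with an SPDK_NVMF comment so the
    # iptr cleanup can later strip them via iptables-save | grep -v SPDK_NVMF | iptables-restore.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3   # host -> namespace reachability check, as in the trace
    # Target configuration, mirroring the rpc_cmd sequence above (rpc.py path assumed):
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

With the listener in place, spdk_nvme_perf is pointed at traddr:10.0.0.3 trsvcid:4420 exactly as traced, and the teardown reverses the steps (links down, bridge deleted, namespace removed, tagged iptables rules stripped).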
00:11:56.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:56.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.327 --rc genhtml_branch_coverage=1 00:11:56.327 --rc genhtml_function_coverage=1 00:11:56.327 --rc genhtml_legend=1 00:11:56.327 --rc geninfo_all_blocks=1 00:11:56.327 --rc geninfo_unexecuted_blocks=1 00:11:56.327 00:11:56.327 ' 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:56.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.327 --rc genhtml_branch_coverage=1 00:11:56.327 --rc genhtml_function_coverage=1 00:11:56.327 --rc genhtml_legend=1 00:11:56.327 --rc geninfo_all_blocks=1 00:11:56.327 --rc geninfo_unexecuted_blocks=1 00:11:56.327 00:11:56.327 ' 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:56.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.327 --rc genhtml_branch_coverage=1 00:11:56.327 --rc genhtml_function_coverage=1 00:11:56.327 --rc genhtml_legend=1 00:11:56.327 --rc geninfo_all_blocks=1 00:11:56.327 --rc geninfo_unexecuted_blocks=1 00:11:56.327 00:11:56.327 ' 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:56.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.327 --rc genhtml_branch_coverage=1 00:11:56.327 --rc genhtml_function_coverage=1 00:11:56.327 --rc genhtml_legend=1 00:11:56.327 --rc geninfo_all_blocks=1 00:11:56.327 --rc geninfo_unexecuted_blocks=1 00:11:56.327 00:11:56.327 ' 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:56.327 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:56.327 18:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 
-- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:56.328 18:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:56.328 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:56.328 #define SPDK_CONFIG_H 00:11:56.328 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:56.328 #define SPDK_CONFIG_APPS 1 00:11:56.328 #define SPDK_CONFIG_ARCH native 00:11:56.328 #undef 
SPDK_CONFIG_ASAN 00:11:56.328 #define SPDK_CONFIG_AVAHI 1 00:11:56.328 #undef SPDK_CONFIG_CET 00:11:56.328 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:56.328 #define SPDK_CONFIG_COVERAGE 1 00:11:56.328 #define SPDK_CONFIG_CROSS_PREFIX 00:11:56.328 #undef SPDK_CONFIG_CRYPTO 00:11:56.328 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:56.328 #undef SPDK_CONFIG_CUSTOMOCF 00:11:56.328 #undef SPDK_CONFIG_DAOS 00:11:56.328 #define SPDK_CONFIG_DAOS_DIR 00:11:56.328 #define SPDK_CONFIG_DEBUG 1 00:11:56.328 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:56.328 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:11:56.328 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:11:56.328 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:11:56.328 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:56.328 #undef SPDK_CONFIG_DPDK_UADK 00:11:56.328 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:56.328 #define SPDK_CONFIG_EXAMPLES 1 00:11:56.328 #undef SPDK_CONFIG_FC 00:11:56.328 #define SPDK_CONFIG_FC_PATH 00:11:56.328 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:56.328 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:56.328 #define SPDK_CONFIG_FSDEV 1 00:11:56.328 #undef SPDK_CONFIG_FUSE 00:11:56.328 #undef SPDK_CONFIG_FUZZER 00:11:56.328 #define SPDK_CONFIG_FUZZER_LIB 00:11:56.328 #define SPDK_CONFIG_GOLANG 1 00:11:56.328 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:56.328 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:56.328 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:56.328 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:56.328 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:56.328 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:56.328 #undef SPDK_CONFIG_HAVE_LZ4 00:11:56.328 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:56.328 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:56.328 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:56.328 #define SPDK_CONFIG_IDXD 1 00:11:56.328 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:56.328 #undef SPDK_CONFIG_IPSEC_MB 00:11:56.328 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:56.328 #define SPDK_CONFIG_ISAL 1 00:11:56.328 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:56.328 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:56.329 #define SPDK_CONFIG_LIBDIR 00:11:56.329 #undef SPDK_CONFIG_LTO 00:11:56.329 #define SPDK_CONFIG_MAX_LCORES 128 00:11:56.329 #define SPDK_CONFIG_NVME_CUSE 1 00:11:56.329 #undef SPDK_CONFIG_OCF 00:11:56.329 #define SPDK_CONFIG_OCF_PATH 00:11:56.329 #define SPDK_CONFIG_OPENSSL_PATH 00:11:56.329 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:56.329 #define SPDK_CONFIG_PGO_DIR 00:11:56.329 #undef SPDK_CONFIG_PGO_USE 00:11:56.329 #define SPDK_CONFIG_PREFIX /usr/local 00:11:56.329 #undef SPDK_CONFIG_RAID5F 00:11:56.329 #undef SPDK_CONFIG_RBD 00:11:56.329 #define SPDK_CONFIG_RDMA 1 00:11:56.329 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:56.329 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:56.329 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:56.329 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:56.329 #define SPDK_CONFIG_SHARED 1 00:11:56.329 #undef SPDK_CONFIG_SMA 00:11:56.329 #define SPDK_CONFIG_TESTS 1 00:11:56.329 #undef SPDK_CONFIG_TSAN 00:11:56.329 #define SPDK_CONFIG_UBLK 1 00:11:56.329 #define SPDK_CONFIG_UBSAN 1 00:11:56.329 #undef SPDK_CONFIG_UNIT_TESTS 00:11:56.329 #undef SPDK_CONFIG_URING 00:11:56.329 #define SPDK_CONFIG_URING_PATH 00:11:56.329 #undef SPDK_CONFIG_URING_ZNS 00:11:56.329 #define SPDK_CONFIG_USDT 1 00:11:56.329 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:56.329 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:56.329 
#define SPDK_CONFIG_VFIO_USER 1 00:11:56.329 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:56.329 #define SPDK_CONFIG_VHOST 1 00:11:56.329 #define SPDK_CONFIG_VIRTIO 1 00:11:56.329 #undef SPDK_CONFIG_VTUNE 00:11:56.329 #define SPDK_CONFIG_VTUNE_DIR 00:11:56.329 #define SPDK_CONFIG_WERROR 1 00:11:56.329 #define SPDK_CONFIG_WPDK_DIR 00:11:56.329 #undef SPDK_CONFIG_XNVME 00:11:56.329 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:56.329 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:56.330 
18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v23.11 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:56.330 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
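[Annotation] The paired `: 0` / `export SPDK_TEST_*` records in the trace above are bash xtrace of autotest_common.sh defaulting each test flag before exporting it. A minimal sketch of that default-then-export idiom, assuming the usual `: "${VAR:=default}"` form — flag names and the `tcp` transport value are taken from the trace, but the exact source wording of autotest_common.sh is not reproduced here:

```bash
#!/usr/bin/env bash
# ':' is a no-op command, but expanding "${VAR:=default}" inside it
# assigns the default only when VAR is unset or empty; under xtrace
# this prints as ': 0' (or ': tcp'), matching the records above.
: "${SPDK_TEST_NVME_FDP:=0}"
export SPDK_TEST_NVME_FDP

: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
export SPDK_TEST_NVMF_TRANSPORT
```

Because the flag is only defaulted, a value injected by the CI job (e.g. `SPDK_TEST_NVMF=1` in this run) survives untouched; the export merely makes it visible to child test scripts.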
00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:11:56.330 
18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:56.330 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:56.331 18:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j10 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 84458 ]] 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 84458 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.XzEKni 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.XzEKni/tests/target /tmp/spdk.XzEKni 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:56.331 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda5 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 
-- # fss["$mount"]=btrfs 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=13249048576 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=20314062848 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6336040960 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=devtmpfs 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4194304 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=4194304 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6256398336 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6266429440 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=2486431744 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=2506571776 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=20140032 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda5 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=btrfs 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=13249048576 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=20314062848 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6336040960 00:11:56.332 
18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6266277888 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6266429440 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=151552 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda2 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext4 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=840085504 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=1012768768 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=103477248 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda3 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=vfat 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=91617280 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=104607744 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12990464 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=1253273600 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=1253285888 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 
-- # fss["$mount"]=fuse.sshfs 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97212805120 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=105088212992 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=2489974784 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:56.332 * Looking for test storage... 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/home 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=13249048576 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ btrfs == tmpfs ]] 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ btrfs == ramfs ]] 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ /home == / ]] 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:56.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:56.332 18:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.332 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:56.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.333 --rc genhtml_branch_coverage=1 00:11:56.333 --rc genhtml_function_coverage=1 00:11:56.333 --rc genhtml_legend=1 00:11:56.333 --rc geninfo_all_blocks=1 00:11:56.333 --rc geninfo_unexecuted_blocks=1 00:11:56.333 00:11:56.333 ' 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:56.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.333 --rc genhtml_branch_coverage=1 00:11:56.333 --rc genhtml_function_coverage=1 00:11:56.333 --rc genhtml_legend=1 00:11:56.333 --rc geninfo_all_blocks=1 00:11:56.333 --rc geninfo_unexecuted_blocks=1 00:11:56.333 00:11:56.333 ' 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:56.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.333 --rc genhtml_branch_coverage=1 00:11:56.333 --rc genhtml_function_coverage=1 00:11:56.333 --rc genhtml_legend=1 00:11:56.333 --rc geninfo_all_blocks=1 00:11:56.333 --rc geninfo_unexecuted_blocks=1 00:11:56.333 00:11:56.333 ' 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:56.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.333 --rc genhtml_branch_coverage=1 00:11:56.333 --rc genhtml_function_coverage=1 00:11:56.333 --rc genhtml_legend=1 00:11:56.333 --rc geninfo_all_blocks=1 00:11:56.333 --rc geninfo_unexecuted_blocks=1 00:11:56.333 00:11:56.333 ' 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
# uname -s 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.333 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.333 18:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:56.333 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:56.334 Cannot find device "nvmf_init_br" 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:56.334 Cannot find device "nvmf_init_br2" 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:56.334 Cannot find device "nvmf_tgt_br" 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:56.334 Cannot find device "nvmf_tgt_br2" 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:56.334 Cannot find device "nvmf_init_br" 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:56.334 Cannot find device "nvmf_init_br2" 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:56.334 Cannot find device "nvmf_tgt_br" 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:56.334 Cannot find device "nvmf_tgt_br2" 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:56.334 Cannot find device "nvmf_br" 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:56.334 Cannot find device "nvmf_init_if" 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:56.334 Cannot find device "nvmf_init_if2" 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:56.334 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:56.334 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:56.334 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:56.334 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:11:56.334 00:11:56.334 --- 10.0.0.3 ping statistics --- 00:11:56.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.334 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:56.334 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:56.334 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:11:56.334 00:11:56.334 --- 10.0.0.4 ping statistics --- 00:11:56.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.334 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:56.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:56.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:56.334 00:11:56.334 --- 10.0.0.1 ping statistics --- 00:11:56.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.334 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:56.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:56.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:11:56.334 00:11:56.334 --- 10.0.0.2 ping statistics --- 00:11:56.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.334 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # return 0 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.334 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:56.335 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:56.335 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:56.335 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:56.335 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:56.335 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:56.335 ************************************ 00:11:56.335 START TEST nvmf_filesystem_no_in_capsule 00:11:56.335 ************************************ 00:11:56.335 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:56.335 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:56.335 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:56.335 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:56.335 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:56.335 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.335 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=84647 00:11:56.335 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 84647 00:11:56.335 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:56.335 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 84647 ']' 00:11:56.335 18:32:11 
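
With connectivity proven, NVMF_APP is rewritten so every later target launch runs inside the namespace, nvme-tcp is loaded for the initiator side, and nvmfappstart brings up nvmf_tgt as pid 84647 with core mask 0xF (hence the four reactor lines that follow). A rough sketch of that launch, assuming the real helper's socket wait is reduced to a simple poll (binary path and flags copied from the @504 line):

modprobe nvme-tcp    # kernel NVMe/TCP initiator used by 'nvme connect' later
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# the real waitforlisten polls until the app accepts RPCs on /var/tmp/spdk.sock
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
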
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.335 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:56.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.335 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.335 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:56.335 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.335 [2024-12-14 18:32:11.796957] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:56.335 [2024-12-14 18:32:11.797126] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.335 [2024-12-14 18:32:11.943287] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.593 [2024-12-14 18:32:12.103462] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.593 [2024-12-14 18:32:12.103560] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.593 [2024-12-14 18:32:12.103587] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.593 [2024-12-14 18:32:12.103622] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.593 [2024-12-14 18:32:12.103634] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:56.593 [2024-12-14 18:32:12.104560] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.593 [2024-12-14 18:32:12.104780] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.593 [2024-12-14 18:32:12.104917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.593 [2024-12-14 18:32:12.104926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.528 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:57.528 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:57.528 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:57.528 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:57.528 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.528 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.528 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:57.528 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:57.528 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.528 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.528 [2024-12-14 18:32:12.967509] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.528 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.528 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:57.528 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.528 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.528 Malloc1 00:11:57.528 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.528 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:57.528 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.528 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.528 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.528 18:32:13 
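
This is the target-side provisioning for the no-in-capsule pass: a TCP transport created with -c 0 (no in-capsule data, so every host write waits for the target's ready-to-transfer), a 512 MiB malloc bdev, and subsystem cnode1. The same three steps expressed through SPDK's scripts/rpc.py (an assumption; the trace drives them through the rpc_cmd helper against the default /var/tmp/spdk.sock):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0   # in-capsule data size 0
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1          # 512 MiB, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME                                # any host, fixed serial
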
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.528 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.528 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.788 [2024-12-14 18:32:13.285134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:57.788 { 00:11:57.788 "aliases": [ 00:11:57.788 "6de83fe2-68dc-4bf4-9b8d-9e540a90a374" 00:11:57.788 ], 00:11:57.788 "assigned_rate_limits": { 00:11:57.788 "r_mbytes_per_sec": 0, 00:11:57.788 "rw_ios_per_sec": 0, 00:11:57.788 "rw_mbytes_per_sec": 0, 00:11:57.788 "w_mbytes_per_sec": 0 00:11:57.788 }, 00:11:57.788 "block_size": 512, 00:11:57.788 "claim_type": "exclusive_write", 00:11:57.788 "claimed": true, 00:11:57.788 "driver_specific": {}, 00:11:57.788 "memory_domains": [ 00:11:57.788 { 00:11:57.788 "dma_device_id": "system", 00:11:57.788 "dma_device_type": 1 00:11:57.788 }, 00:11:57.788 { 00:11:57.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.788 
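
The subsystem is then wired up: Malloc1 becomes its namespace, and a listener opens on the namespace-side address 10.0.0.3, port 4420 (the standard NVMe/TCP port that the earlier iptables rules opened). Equivalent rpc.py form, same assumption as above:

scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420
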
"dma_device_type": 2 00:11:57.788 } 00:11:57.788 ], 00:11:57.788 "name": "Malloc1", 00:11:57.788 "num_blocks": 1048576, 00:11:57.788 "product_name": "Malloc disk", 00:11:57.788 "supported_io_types": { 00:11:57.788 "abort": true, 00:11:57.788 "compare": false, 00:11:57.788 "compare_and_write": false, 00:11:57.788 "copy": true, 00:11:57.788 "flush": true, 00:11:57.788 "get_zone_info": false, 00:11:57.788 "nvme_admin": false, 00:11:57.788 "nvme_io": false, 00:11:57.788 "nvme_io_md": false, 00:11:57.788 "nvme_iov_md": false, 00:11:57.788 "read": true, 00:11:57.788 "reset": true, 00:11:57.788 "seek_data": false, 00:11:57.788 "seek_hole": false, 00:11:57.788 "unmap": true, 00:11:57.788 "write": true, 00:11:57.788 "write_zeroes": true, 00:11:57.788 "zcopy": true, 00:11:57.788 "zone_append": false, 00:11:57.788 "zone_management": false 00:11:57.788 }, 00:11:57.788 "uuid": "6de83fe2-68dc-4bf4-9b8d-9e540a90a374", 00:11:57.788 "zoned": false 00:11:57.788 } 00:11:57.788 ]' 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:57.788 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:58.047 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:58.047 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:58.047 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.047 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:58.047 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:59.951 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:59.951 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:59.951 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:59.951 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:59.951 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.951 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:59.951 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:59.951 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:59.951 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:59.951 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:59.951 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:59.951 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:59.951 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:59.951 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:59.951 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:59.951 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:59.951 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:00.210 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:00.210 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:01.146 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:01.146 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:01.146 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:01.146 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:01.146 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.146 ************************************ 00:12:01.146 START TEST filesystem_ext4 00:12:01.146 ************************************ 00:12:01.146 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
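
After nvme connect, the waitforserial loop polls lsblk until exactly one device with serial SPDKISFASTANDAWESOME shows up; the block device name is then extracted with the lookahead grep seen at @63, and the disk gets a GPT with a single partition spanning it. Condensed (regex and parted arguments copied from the trace):

nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
mkdir -p /mnt/device
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe    # re-read the table so /dev/${nvme_name}p1 becomes visible
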
00:12:01.146 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:01.146 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:01.146 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:01.146 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:01.146 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:01.146 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:01.146 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:01.146 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:01.146 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:01.146 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:01.146 mke2fs 1.47.0 (5-Feb-2023) 00:12:01.405 Discarding device blocks: 0/522240 done 00:12:01.405 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:01.405 Filesystem UUID: 1512e923-6caf-4354-ada8-2729068d5fbb 00:12:01.405 Superblock backups stored on blocks: 00:12:01.405 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:01.405 00:12:01.405 Allocating group tables: 0/64 done 00:12:01.405 Writing inode tables: 0/64 done 00:12:01.405 Creating journal (8192 blocks): done 00:12:01.405 Writing superblocks and filesystem accounting information: 0/64 done 00:12:01.405 00:12:01.405 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:01.405 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.671 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.931 
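
Every filesystem_* subtest repeats the same smoke cycle after its mkfs: mount, create a file, sync, delete it, sync, unmount, then confirm the target process and the namespace both survived. The cycle, distilled from the @23 through @43 trace lines (device names as above):

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa && sync
rm /mnt/device/aaa && sync
umount /mnt/device
kill -0 "$nvmfpid"                        # target still running?
lsblk -l -o NAME | grep -q -w nvme0n1     # whole namespace still present
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present
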
18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 84647 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.931 ************************************ 00:12:06.931 END TEST filesystem_ext4 00:12:06.931 ************************************ 00:12:06.931 00:12:06.931 real 0m5.705s 00:12:06.931 user 0m0.023s 00:12:06.931 sys 0m0.071s 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.931 ************************************ 00:12:06.931 START TEST filesystem_btrfs 00:12:06.931 ************************************ 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:06.931 18:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:06.931 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:07.190 btrfs-progs v6.8.1 00:12:07.190 See https://btrfs.readthedocs.io for more information. 00:12:07.190 00:12:07.190 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:07.190 NOTE: several default settings have changed in version 5.15, please make sure 00:12:07.190 this does not affect your deployments: 00:12:07.190 - DUP for metadata (-m dup) 00:12:07.190 - enabled no-holes (-O no-holes) 00:12:07.190 - enabled free-space-tree (-R free-space-tree) 00:12:07.190 00:12:07.190 Label: (null) 00:12:07.190 UUID: 43065bb6-0daf-44a0-bb5c-beee0b085009 00:12:07.190 Node size: 16384 00:12:07.190 Sector size: 4096 (CPU page size: 4096) 00:12:07.190 Filesystem size: 510.00MiB 00:12:07.190 Block group profiles: 00:12:07.190 Data: single 8.00MiB 00:12:07.190 Metadata: DUP 32.00MiB 00:12:07.190 System: DUP 8.00MiB 00:12:07.190 SSD detected: yes 00:12:07.190 Zoned device: no 00:12:07.190 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:07.190 Checksum: crc32c 00:12:07.190 Number of devices: 1 00:12:07.190 Devices: 00:12:07.190 ID SIZE PATH 00:12:07.190 1 510.00MiB /dev/nvme0n1p1 00:12:07.190 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 84647 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:07.190 
18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:07.190 ************************************ 00:12:07.190 END TEST filesystem_btrfs 00:12:07.190 ************************************ 00:12:07.190 00:12:07.190 real 0m0.277s 00:12:07.190 user 0m0.015s 00:12:07.190 sys 0m0.073s 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.190 ************************************ 00:12:07.190 START TEST filesystem_xfs 00:12:07.190 ************************************ 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:07.190 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:07.449 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:07.449 = sectsz=512 attr=2, projid32bit=1 00:12:07.449 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:07.449 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:07.449 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:07.449 = sunit=0 swidth=0 blks 00:12:07.449 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:07.449 log =internal log bsize=4096 blocks=16384, version=2 00:12:07.449 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:07.449 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:08.017 Discarding blocks...Done. 00:12:08.017 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:08.017 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 84647 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:10.552 ************************************ 00:12:10.552 END TEST filesystem_xfs 00:12:10.552 ************************************ 00:12:10.552 00:12:10.552 real 0m3.215s 00:12:10.552 user 0m0.023s 00:12:10.552 sys 0m0.063s 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.552 18:32:26 
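
Teardown for the pass: the test partition is removed with parted serialized under flock on the disk, buffers are flushed, and the initiator disconnects from cnode1 before the subsystem is deleted over RPC. Sketch (rpc.py form again an assumption):

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
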
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 84647 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 84647 ']' 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 84647 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:10.552 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84647 00:12:10.811 killing process with pid 84647 00:12:10.811 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:10.811 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:10.811 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84647' 00:12:10.811 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 84647 00:12:10.811 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 84647 00:12:11.378 ************************************ 00:12:11.378 END TEST nvmf_filesystem_no_in_capsule 00:12:11.378 ************************************ 00:12:11.378 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:11.378 00:12:11.378 real 0m15.168s 00:12:11.378 user 0m58.029s 00:12:11.378 sys 0m1.978s 00:12:11.378 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.378 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.378 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:11.379 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:11.379 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:11.379 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:11.379 ************************************ 00:12:11.379 START TEST nvmf_filesystem_in_capsule 00:12:11.379 ************************************ 00:12:11.379 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:11.379 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:11.379 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:11.379 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:11.379 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:11.379 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.379 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=85025 00:12:11.379 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 85025 00:12:11.379 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 85025 ']' 00:12:11.379 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.379 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.379 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:11.379 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:11.379 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:11.379 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.379 [2024-12-14 18:32:27.007949] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:11.379 [2024-12-14 18:32:27.008232] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.638 [2024-12-14 18:32:27.143422] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.638 [2024-12-14 18:32:27.225266] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.638 [2024-12-14 18:32:27.225691] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.638 [2024-12-14 18:32:27.225963] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.638 [2024-12-14 18:32:27.226183] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.638 [2024-12-14 18:32:27.226279] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.638 [2024-12-14 18:32:27.226505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.638 [2024-12-14 18:32:27.226670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.638 [2024-12-14 18:32:27.226730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.638 [2024-12-14 18:32:27.226731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.576 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:12.576 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:12.576 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:12.576 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:12.576 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.576 [2024-12-14 18:32:28.004194] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.576 18:32:28 
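
The in-capsule pass restarts nvmf_tgt (new pid 85025) and replays the identical provisioning; the only functional difference is the transport's -c 4096, which lets a host place up to 4096 bytes of write data directly inside the command capsule instead of round-tripping through a ready-to-transfer PDU. Side by side:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # first pass: no in-capsule data
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # this pass: 4 KiB in-capsule
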
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.576 Malloc1 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.576 [2024-12-14 18:32:28.242446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:12.576 18:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.576 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:12.576 { 00:12:12.576 "aliases": [ 00:12:12.576 "2cdcdc4c-52a7-4c5b-9610-86fd5457b04f" 00:12:12.576 ], 00:12:12.576 "assigned_rate_limits": { 00:12:12.576 "r_mbytes_per_sec": 0, 00:12:12.576 "rw_ios_per_sec": 0, 00:12:12.576 "rw_mbytes_per_sec": 0, 00:12:12.576 "w_mbytes_per_sec": 0 00:12:12.576 }, 00:12:12.576 "block_size": 512, 00:12:12.576 "claim_type": "exclusive_write", 00:12:12.576 "claimed": true, 00:12:12.576 "driver_specific": {}, 00:12:12.576 "memory_domains": [ 00:12:12.576 { 00:12:12.576 "dma_device_id": "system", 00:12:12.576 "dma_device_type": 1 00:12:12.576 }, 00:12:12.576 { 00:12:12.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.576 "dma_device_type": 2 00:12:12.576 } 00:12:12.576 ], 00:12:12.576 "name": "Malloc1", 00:12:12.576 "num_blocks": 1048576, 00:12:12.576 "product_name": "Malloc disk", 00:12:12.576 "supported_io_types": { 00:12:12.576 "abort": true, 00:12:12.576 "compare": false, 00:12:12.576 "compare_and_write": false, 00:12:12.576 "copy": true, 00:12:12.576 "flush": true, 00:12:12.576 "get_zone_info": false, 00:12:12.576 "nvme_admin": false, 00:12:12.576 "nvme_io": false, 00:12:12.576 "nvme_io_md": false, 00:12:12.576 "nvme_iov_md": false, 00:12:12.576 "read": true, 00:12:12.576 "reset": true, 00:12:12.576 "seek_data": false, 00:12:12.576 "seek_hole": false, 00:12:12.576 "unmap": true, 00:12:12.576 "write": true, 00:12:12.576 "write_zeroes": true, 00:12:12.576 "zcopy": true, 00:12:12.576 "zone_append": false, 00:12:12.576 "zone_management": false 00:12:12.576 }, 00:12:12.576 "uuid": "2cdcdc4c-52a7-4c5b-9610-86fd5457b04f", 00:12:12.577 "zoned": false 00:12:12.577 } 00:12:12.577 ]' 00:12:12.577 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:12.836 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:12.836 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:12.836 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:12.836 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:12.836 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:12.836 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:12.836 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:12.836 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:12.836 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:12.836 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.836 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:12.836 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:15.390 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:15.390 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:15.390 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.390 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:15.390 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.390 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:15.390 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:15.390 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:15.390 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:15.390 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:15.390 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:15.390 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:15.390 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:15.390 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:15.390 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:15.390 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:15.390 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:15.390 18:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:15.390 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:16.327 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:16.327 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:16.327 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:16.327 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:16.327 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.327 ************************************ 00:12:16.327 START TEST filesystem_in_capsule_ext4 00:12:16.327 ************************************ 00:12:16.327 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:16.327 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:16.327 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:16.327 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:16.327 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:16.327 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:16.327 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:16.327 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:16.327 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:16.327 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:16.327 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:16.327 mke2fs 1.47.0 (5-Feb-2023) 00:12:16.327 Discarding device blocks: 0/522240 done 00:12:16.327 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:16.327 Filesystem UUID: 263ae75b-fa19-4b67-a5fb-48947b0335c0 00:12:16.327 Superblock backups stored on blocks: 00:12:16.327 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:16.327 00:12:16.327 Allocating group tables: 0/64 done 00:12:16.327 Writing inode tables: 
0/64 done 00:12:16.327 Creating journal (8192 blocks): done 00:12:16.327 Writing superblocks and filesystem accounting information: 0/64 done 00:12:16.327 00:12:16.327 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:16.327 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 85025 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:22.894 ************************************ 00:12:22.894 END TEST filesystem_in_capsule_ext4 00:12:22.894 ************************************ 00:12:22.894 00:12:22.894 real 0m5.679s 00:12:22.894 user 0m0.026s 00:12:22.894 sys 0m0.068s 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.894 
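The ext4 pass above (and the btrfs and xfs passes that follow) all run the same create/mount/verify cycle from target/filesystem.sh. A sketch with the device, mountpoint, and target PID (85025) from this run; note mkfs.ext4's 522240 1 KiB blocks correspond exactly to the 510 MiB partition created earlier:

  mkfs.ext4 -F /dev/nvme0n1p1           # -F: force, since ext4 prompts otherwise
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync         # prove a write round-trips to the target
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 85025                         # the nvmf target must still be running
  lsblk -l -o NAME | grep -qw nvme0n1   # device and partition still enumerate
  lsblk -l -o NAME | grep -qw nvme0n1p1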
************************************ 00:12:22.894 START TEST filesystem_in_capsule_btrfs 00:12:22.894 ************************************ 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:22.894 btrfs-progs v6.8.1 00:12:22.894 See https://btrfs.readthedocs.io for more information. 00:12:22.894 00:12:22.894 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:22.894 NOTE: several default settings have changed in version 5.15, please make sure 00:12:22.894 this does not affect your deployments: 00:12:22.894 - DUP for metadata (-m dup) 00:12:22.894 - enabled no-holes (-O no-holes) 00:12:22.894 - enabled free-space-tree (-R free-space-tree) 00:12:22.894 00:12:22.894 Label: (null) 00:12:22.894 UUID: fd32499f-38e9-4566-86bd-0fe1c6a22f97 00:12:22.894 Node size: 16384 00:12:22.894 Sector size: 4096 (CPU page size: 4096) 00:12:22.894 Filesystem size: 510.00MiB 00:12:22.894 Block group profiles: 00:12:22.894 Data: single 8.00MiB 00:12:22.894 Metadata: DUP 32.00MiB 00:12:22.894 System: DUP 8.00MiB 00:12:22.894 SSD detected: yes 00:12:22.894 Zoned device: no 00:12:22.894 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:22.894 Checksum: crc32c 00:12:22.894 Number of devices: 1 00:12:22.894 Devices: 00:12:22.894 ID SIZE PATH 00:12:22.894 1 510.00MiB /dev/nvme0n1p1 00:12:22.894 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 85025 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:22.894 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:22.894 00:12:22.894 real 0m0.282s 00:12:22.894 user 0m0.016s 00:12:22.894 sys 0m0.074s 00:12:22.894 ************************************ 00:12:22.894 END TEST filesystem_in_capsule_btrfs 00:12:22.894 ************************************ 00:12:22.895 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:12:22.895 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:22.895 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:22.895 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:22.895 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:22.895 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.895 ************************************ 00:12:22.895 START TEST filesystem_in_capsule_xfs 00:12:22.895 ************************************ 00:12:22.895 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:22.895 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:22.895 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:22.895 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:22.895 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:22.895 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:22.895 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:22.895 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:22.895 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:22.895 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:22.895 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:22.895 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:22.895 = sectsz=512 attr=2, projid32bit=1 00:12:22.895 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:22.895 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:22.895 data = bsize=4096 blocks=130560, imaxpct=25 00:12:22.895 = sunit=0 swidth=0 blks 00:12:22.895 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:22.895 log =internal log bsize=4096 blocks=16384, version=2 00:12:22.895 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:22.895 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:23.154 Discarding blocks...Done. 
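A quick cross-check of the geometry mkfs.xfs reports above (plain arithmetic, not part of the harness): four allocation groups of 32640 blocks at a 4096-byte block size cover the 510 MiB partition exactly:

  echo $(( 4 * 32640 ))       # 130560 data blocks, as printed
  echo $(( 130560 * 4096 ))   # 534773760 bytes = 510.00 MiB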
00:12:23.154 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:23.154 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 85025 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:25.058 ************************************ 00:12:25.058 END TEST filesystem_in_capsule_xfs 00:12:25.058 ************************************ 00:12:25.058 00:12:25.058 real 0m2.673s 00:12:25.058 user 0m0.026s 00:12:25.058 sys 0m0.064s 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 85025 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 85025 ']' 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 85025 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85025 00:12:25.058 killing process with pid 85025 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85025' 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 85025 00:12:25.058 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 85025 00:12:25.625 ************************************ 00:12:25.625 END TEST nvmf_filesystem_in_capsule 00:12:25.625 ************************************ 00:12:25.625 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 
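Teardown mirrors setup in reverse: drop the partition, disconnect the initiator, delete the subsystem over RPC, then stop the target. Roughly, with rpc_cmd again standing in for the scripts/rpc.py wrapper and 85025 as this run's target PID:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1    # remove partition 1 under an exclusive lock
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  while lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 85025 && wait 85025    # killprocess: SIGTERM then reap; wait works because the target is a child of this shell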
00:12:25.625 00:12:25.626 real 0m14.367s 00:12:25.626 user 0m55.276s 00:12:25.626 sys 0m1.713s 00:12:25.626 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.626 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.626 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:25.626 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:25.626 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:25.884 rmmod nvme_tcp 00:12:25.884 rmmod nvme_fabrics 00:12:25.884 rmmod nvme_keyring 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-save 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-restore 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:25.884 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:25.885 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:25.885 18:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:25.885 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:25.885 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:25.885 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:25.885 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:25.885 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:12:26.144 00:12:26.144 real 0m30.939s 00:12:26.144 user 1m53.788s 00:12:26.144 sys 0m4.283s 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:26.144 ************************************ 00:12:26.144 END TEST nvmf_filesystem 00:12:26.144 ************************************ 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:26.144 ************************************ 00:12:26.144 START TEST nvmf_target_discovery 00:12:26.144 ************************************ 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:26.144 * Looking for test storage... 
00:12:26.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:26.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.144 --rc genhtml_branch_coverage=1 00:12:26.144 --rc genhtml_function_coverage=1 00:12:26.144 --rc genhtml_legend=1 00:12:26.144 --rc geninfo_all_blocks=1 00:12:26.144 --rc geninfo_unexecuted_blocks=1 00:12:26.144 00:12:26.144 ' 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:26.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.144 --rc genhtml_branch_coverage=1 00:12:26.144 --rc genhtml_function_coverage=1 00:12:26.144 --rc genhtml_legend=1 00:12:26.144 --rc geninfo_all_blocks=1 00:12:26.144 --rc geninfo_unexecuted_blocks=1 00:12:26.144 00:12:26.144 ' 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:26.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.144 --rc genhtml_branch_coverage=1 00:12:26.144 --rc genhtml_function_coverage=1 00:12:26.144 --rc genhtml_legend=1 00:12:26.144 --rc geninfo_all_blocks=1 00:12:26.144 --rc geninfo_unexecuted_blocks=1 00:12:26.144 00:12:26.144 ' 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:26.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.144 --rc genhtml_branch_coverage=1 00:12:26.144 --rc genhtml_function_coverage=1 00:12:26.144 --rc genhtml_legend=1 00:12:26.144 --rc geninfo_all_blocks=1 00:12:26.144 --rc geninfo_unexecuted_blocks=1 00:12:26.144 00:12:26.144 ' 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:26.144 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.404 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:26.405 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:26.405 Cannot find device "nvmf_init_br" 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:26.405 Cannot find device "nvmf_init_br2" 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:26.405 Cannot find device "nvmf_tgt_br" 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:26.405 Cannot find device "nvmf_tgt_br2" 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:26.405 Cannot find device "nvmf_init_br" 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:26.405 Cannot find device "nvmf_init_br2" 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:12:26.405 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:26.405 Cannot find device "nvmf_tgt_br" 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:26.405 Cannot find device "nvmf_tgt_br2" 00:12:26.405 18:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:26.405 Cannot find device "nvmf_br" 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:26.405 Cannot find device "nvmf_init_if" 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:26.405 Cannot find device "nvmf_init_if2" 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:26.405 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:26.405 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:26.405 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:26.665 18:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:26.665 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:26.665 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:12:26.665 00:12:26.665 --- 10.0.0.3 ping statistics --- 00:12:26.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.665 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:26.665 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:26.665 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:12:26.665 00:12:26.665 --- 10.0.0.4 ping statistics --- 00:12:26.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.665 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:26.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:12:26.665 00:12:26.665 --- 10.0.0.1 ping statistics --- 00:12:26.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.665 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:26.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:12:26.665 00:12:26.665 --- 10.0.0.2 ping statistics --- 00:12:26.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.665 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # return 0 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=85615 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 85615 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@831 -- # '[' -z 85615 ']' 00:12:26.665 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.666 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.666 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:26.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.666 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.666 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:26.666 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:26.924 [2024-12-14 18:32:42.423817] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:26.924 [2024-12-14 18:32:42.423923] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.924 [2024-12-14 18:32:42.564376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.924 [2024-12-14 18:32:42.647678] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.924 [2024-12-14 18:32:42.647749] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.924 [2024-12-14 18:32:42.647760] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.924 [2024-12-14 18:32:42.647767] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.924 [2024-12-14 18:32:42.647774] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
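[Editor's note: the entries above show the harness starting the SPDK target inside the test network namespace and waiting for its RPC socket. A minimal sketch of the equivalent shell, reconstructed from the trace — the binary path, core mask, and socket path are taken from the log; the polling loop is illustrative, not the harness's actual waitforlisten implementation:

    # Launch nvmf_tgt in the target namespace (instance 0, all tracepoint groups, cores 0-3)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Illustrative wait: poll the UNIX-domain RPC socket until the app answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done
    echo "nvmf_tgt (pid $nvmfpid) is up"

Once the socket answers, the per-test RPC provisioning that follows in the trace (bdev_null_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) can proceed.]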
00:12:26.924 [2024-12-14 18:32:42.647916] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.924 [2024-12-14 18:32:42.648438] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.924 [2024-12-14 18:32:42.649038] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.924 [2024-12-14 18:32:42.649048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.861 [2024-12-14 18:32:43.419657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.861 Null1 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.861 18:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.861 [2024-12-14 18:32:43.463788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.861 Null2 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:27.861 Null3 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.861 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.862 Null4 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.862 18:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.862 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -a 10.0.0.3 -s 4420 00:12:28.122 00:12:28.122 Discovery Log Number of Records 6, Generation counter 6 00:12:28.122 =====Discovery Log Entry 0====== 00:12:28.122 trtype: tcp 00:12:28.122 adrfam: ipv4 00:12:28.122 subtype: current discovery subsystem 00:12:28.122 treq: not required 00:12:28.122 portid: 0 00:12:28.122 trsvcid: 4420 00:12:28.122 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:28.122 traddr: 10.0.0.3 00:12:28.122 eflags: explicit discovery connections, duplicate discovery information 00:12:28.122 sectype: none 00:12:28.122 =====Discovery Log Entry 1====== 00:12:28.122 trtype: tcp 00:12:28.122 adrfam: ipv4 00:12:28.122 subtype: nvme subsystem 00:12:28.122 treq: not required 00:12:28.122 portid: 0 00:12:28.122 trsvcid: 4420 00:12:28.122 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:28.122 traddr: 10.0.0.3 00:12:28.122 eflags: none 00:12:28.122 sectype: none 00:12:28.122 =====Discovery Log Entry 2====== 00:12:28.122 trtype: tcp 00:12:28.122 adrfam: ipv4 00:12:28.122 subtype: nvme subsystem 00:12:28.122 treq: not required 00:12:28.122 portid: 0 00:12:28.122 trsvcid: 4420 00:12:28.122 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:28.122 traddr: 10.0.0.3 00:12:28.122 eflags: none 00:12:28.122 sectype: none 00:12:28.122 =====Discovery Log Entry 3====== 00:12:28.122 trtype: tcp 00:12:28.122 adrfam: ipv4 00:12:28.122 subtype: nvme subsystem 00:12:28.122 treq: not required 00:12:28.122 portid: 0 00:12:28.122 trsvcid: 4420 00:12:28.122 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:28.122 traddr: 10.0.0.3 00:12:28.122 eflags: none 00:12:28.122 sectype: none 00:12:28.122 =====Discovery Log Entry 4====== 00:12:28.122 trtype: tcp 00:12:28.122 adrfam: ipv4 00:12:28.122 subtype: nvme subsystem 
00:12:28.122 treq: not required 00:12:28.122 portid: 0 00:12:28.122 trsvcid: 4420 00:12:28.122 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:28.122 traddr: 10.0.0.3 00:12:28.122 eflags: none 00:12:28.122 sectype: none 00:12:28.122 =====Discovery Log Entry 5====== 00:12:28.122 trtype: tcp 00:12:28.122 adrfam: ipv4 00:12:28.122 subtype: discovery subsystem referral 00:12:28.122 treq: not required 00:12:28.122 portid: 0 00:12:28.122 trsvcid: 4430 00:12:28.122 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:28.122 traddr: 10.0.0.3 00:12:28.122 eflags: none 00:12:28.122 sectype: none 00:12:28.122 Perform nvmf subsystem discovery via RPC 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.122 [ 00:12:28.122 { 00:12:28.122 "allow_any_host": true, 00:12:28.122 "hosts": [], 00:12:28.122 "listen_addresses": [ 00:12:28.122 { 00:12:28.122 "adrfam": "IPv4", 00:12:28.122 "traddr": "10.0.0.3", 00:12:28.122 "trsvcid": "4420", 00:12:28.122 "trtype": "TCP" 00:12:28.122 } 00:12:28.122 ], 00:12:28.122 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:28.122 "subtype": "Discovery" 00:12:28.122 }, 00:12:28.122 { 00:12:28.122 "allow_any_host": true, 00:12:28.122 "hosts": [], 00:12:28.122 "listen_addresses": [ 00:12:28.122 { 00:12:28.122 "adrfam": "IPv4", 00:12:28.122 "traddr": "10.0.0.3", 00:12:28.122 "trsvcid": "4420", 00:12:28.122 "trtype": "TCP" 00:12:28.122 } 00:12:28.122 ], 00:12:28.122 "max_cntlid": 65519, 00:12:28.122 "max_namespaces": 32, 00:12:28.122 "min_cntlid": 1, 00:12:28.122 "model_number": "SPDK bdev Controller", 00:12:28.122 "namespaces": [ 00:12:28.122 { 00:12:28.122 "bdev_name": "Null1", 00:12:28.122 "name": "Null1", 00:12:28.122 "nguid": "C53604D69FD348448FD1616762EB69D5", 00:12:28.122 "nsid": 1, 00:12:28.122 "uuid": "c53604d6-9fd3-4844-8fd1-616762eb69d5" 00:12:28.122 } 00:12:28.122 ], 00:12:28.122 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:28.122 "serial_number": "SPDK00000000000001", 00:12:28.122 "subtype": "NVMe" 00:12:28.122 }, 00:12:28.122 { 00:12:28.122 "allow_any_host": true, 00:12:28.122 "hosts": [], 00:12:28.122 "listen_addresses": [ 00:12:28.122 { 00:12:28.122 "adrfam": "IPv4", 00:12:28.122 "traddr": "10.0.0.3", 00:12:28.122 "trsvcid": "4420", 00:12:28.122 "trtype": "TCP" 00:12:28.122 } 00:12:28.122 ], 00:12:28.122 "max_cntlid": 65519, 00:12:28.122 "max_namespaces": 32, 00:12:28.122 "min_cntlid": 1, 00:12:28.122 "model_number": "SPDK bdev Controller", 00:12:28.122 "namespaces": [ 00:12:28.122 { 00:12:28.122 "bdev_name": "Null2", 00:12:28.122 "name": "Null2", 00:12:28.122 "nguid": "FD0668169FAC4467A4C08855EE8FC548", 00:12:28.122 "nsid": 1, 00:12:28.122 "uuid": "fd066816-9fac-4467-a4c0-8855ee8fc548" 00:12:28.122 } 00:12:28.122 ], 00:12:28.122 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:28.122 "serial_number": "SPDK00000000000002", 00:12:28.122 "subtype": "NVMe" 00:12:28.122 }, 00:12:28.122 { 00:12:28.122 "allow_any_host": true, 00:12:28.122 "hosts": [], 00:12:28.122 "listen_addresses": [ 00:12:28.122 { 00:12:28.122 "adrfam": "IPv4", 00:12:28.122 "traddr": "10.0.0.3", 00:12:28.122 "trsvcid": "4420", 00:12:28.122 
"trtype": "TCP" 00:12:28.122 } 00:12:28.122 ], 00:12:28.122 "max_cntlid": 65519, 00:12:28.122 "max_namespaces": 32, 00:12:28.122 "min_cntlid": 1, 00:12:28.122 "model_number": "SPDK bdev Controller", 00:12:28.122 "namespaces": [ 00:12:28.122 { 00:12:28.122 "bdev_name": "Null3", 00:12:28.122 "name": "Null3", 00:12:28.122 "nguid": "B2E2CFBAF97B48D99E3006E8E689B1ED", 00:12:28.122 "nsid": 1, 00:12:28.122 "uuid": "b2e2cfba-f97b-48d9-9e30-06e8e689b1ed" 00:12:28.122 } 00:12:28.122 ], 00:12:28.122 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:28.122 "serial_number": "SPDK00000000000003", 00:12:28.122 "subtype": "NVMe" 00:12:28.122 }, 00:12:28.122 { 00:12:28.122 "allow_any_host": true, 00:12:28.122 "hosts": [], 00:12:28.122 "listen_addresses": [ 00:12:28.122 { 00:12:28.122 "adrfam": "IPv4", 00:12:28.122 "traddr": "10.0.0.3", 00:12:28.122 "trsvcid": "4420", 00:12:28.122 "trtype": "TCP" 00:12:28.122 } 00:12:28.122 ], 00:12:28.122 "max_cntlid": 65519, 00:12:28.122 "max_namespaces": 32, 00:12:28.122 "min_cntlid": 1, 00:12:28.122 "model_number": "SPDK bdev Controller", 00:12:28.122 "namespaces": [ 00:12:28.122 { 00:12:28.122 "bdev_name": "Null4", 00:12:28.122 "name": "Null4", 00:12:28.122 "nguid": "3C4E19988DA04BD19EB4A017303ACC7B", 00:12:28.122 "nsid": 1, 00:12:28.122 "uuid": "3c4e1998-8da0-4bd1-9eb4-a017303acc7b" 00:12:28.122 } 00:12:28.122 ], 00:12:28.122 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:28.122 "serial_number": "SPDK00000000000004", 00:12:28.122 "subtype": "NVMe" 00:12:28.122 } 00:12:28.122 ] 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.122 18:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.122 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:28.123 18:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:28.123 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:28.382 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:28.382 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:28.382 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:28.382 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:28.382 rmmod nvme_tcp 00:12:28.382 rmmod nvme_fabrics 00:12:28.382 rmmod nvme_keyring 00:12:28.382 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:28.382 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:28.382 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:28.382 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 85615 ']' 00:12:28.382 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 85615 00:12:28.382 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 85615 ']' 00:12:28.382 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 85615 00:12:28.382 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:28.382 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:28.382 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85615 00:12:28.382 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:28.382 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:28.382 killing process with pid 85615 00:12:28.382 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85615' 00:12:28.382 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 85615 00:12:28.382 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 85615 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-save 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:28.642 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:28.901 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:28.901 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:28.901 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:28.901 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.901 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.901 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.901 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:12:28.901 00:12:28.901 real 0m2.749s 00:12:28.901 user 0m6.598s 00:12:28.901 sys 0m0.800s 00:12:28.901 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- 
# xtrace_disable 00:12:28.901 ************************************ 00:12:28.901 END TEST nvmf_target_discovery 00:12:28.901 ************************************ 00:12:28.901 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.901 18:32:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:28.901 18:32:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:28.901 18:32:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:28.901 18:32:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:28.901 ************************************ 00:12:28.901 START TEST nvmf_referrals 00:12:28.901 ************************************ 00:12:28.901 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:28.901 * Looking for test storage... 00:12:28.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:28.901 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:28.901 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:28.901 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:29.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.161 --rc genhtml_branch_coverage=1 00:12:29.161 --rc genhtml_function_coverage=1 00:12:29.161 --rc genhtml_legend=1 00:12:29.161 --rc geninfo_all_blocks=1 00:12:29.161 --rc geninfo_unexecuted_blocks=1 00:12:29.161 00:12:29.161 ' 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:29.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.161 --rc genhtml_branch_coverage=1 00:12:29.161 --rc genhtml_function_coverage=1 00:12:29.161 --rc genhtml_legend=1 00:12:29.161 --rc geninfo_all_blocks=1 00:12:29.161 --rc geninfo_unexecuted_blocks=1 00:12:29.161 00:12:29.161 ' 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:29.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.161 --rc genhtml_branch_coverage=1 00:12:29.161 --rc genhtml_function_coverage=1 00:12:29.161 --rc genhtml_legend=1 00:12:29.161 --rc geninfo_all_blocks=1 00:12:29.161 --rc geninfo_unexecuted_blocks=1 00:12:29.161 00:12:29.161 ' 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:29.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.161 --rc genhtml_branch_coverage=1 00:12:29.161 --rc genhtml_function_coverage=1 00:12:29.161 --rc genhtml_legend=1 00:12:29.161 --rc geninfo_all_blocks=1 00:12:29.161 --rc geninfo_unexecuted_blocks=1 00:12:29.161 00:12:29.161 ' 00:12:29.161 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
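[Editor's note: the cmp_versions xtrace above decides whether the installed lcov predates 2.x so the matching coverage flags can be exported. A condensed bash sketch of that dotted-version comparison, paraphrasing the scripts/common.sh logic visible in the trace — the real helper handles more operators than just less-than:

    # Return 0 when dotted version $1 is strictly less than $2, e.g. "lt 1.15 2"
    version_lt() {
        local -a v1 v2
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # versions are equal, so not less-than
    }

    version_lt 1.15 2 && echo "lcov < 2: keep legacy --rc lcov_* options"

Here lcov 1.15 sorts before 2, which is why the legacy --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 options are exported in the trace.]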
00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.162 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:29.162 18:32:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:29.162 Cannot find device "nvmf_init_br" 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:29.162 Cannot find device "nvmf_init_br2" 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:29.162 Cannot find device "nvmf_tgt_br" 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:29.162 Cannot find device "nvmf_tgt_br2" 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:29.162 Cannot find device "nvmf_init_br" 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:29.162 Cannot find device "nvmf_init_br2" 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:29.162 Cannot find device "nvmf_tgt_br" 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:29.162 Cannot find device "nvmf_tgt_br2" 00:12:29.162 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 00:12:29.163 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:29.163 Cannot find device "nvmf_br" 00:12:29.163 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 00:12:29.163 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:29.163 Cannot find device "nvmf_init_if" 00:12:29.163 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 00:12:29.163 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:29.163 Cannot find device "nvmf_init_if2" 00:12:29.163 18:32:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 00:12:29.163 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:29.163 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:29.163 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 00:12:29.163 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:29.163 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:29.163 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 00:12:29.163 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:29.163 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:29.422 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:29.422 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:29.422 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:29.422 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:29.422 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:29.422 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:29.422 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:29.422 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:29.422 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:29.422 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:29.422 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:29.422 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:29.422 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:29.422 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:29.422 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:29.422 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:29.422 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:29.422 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:29.422 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:29.422 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:29.422 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:29.422 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:29.422 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:29.422 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:29.422 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:29.422 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:29.422 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:29.422 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:29.422 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:29.422 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:29.422 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:29.422 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:29.422 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:12:29.422 00:12:29.422 --- 10.0.0.3 ping statistics --- 00:12:29.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.422 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:12:29.422 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:29.422 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:29.422 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:12:29.422 00:12:29.422 --- 10.0.0.4 ping statistics --- 00:12:29.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.422 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:29.422 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:29.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:29.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:29.422 00:12:29.422 --- 10.0.0.1 ping statistics --- 00:12:29.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.422 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:29.422 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:29.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
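
[annotation] At this point the test bed is fully wired: each side gets veth pairs, the target ends are moved into the nvmf_tgt_ns_spdk namespace, the peer ends are enslaved to the nvmf_br bridge, and the pings confirm L3 reachability in both directions. A condensed sketch of the topology for one initiator/target pair (the trace builds a second pair of each for 10.0.0.2 and 10.0.0.4):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3   # host -> namespace, across the bridge
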
00:12:29.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:12:29.422 00:12:29.422 --- 10.0.0.2 ping statistics --- 00:12:29.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.423 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:29.423 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.423 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # return 0 00:12:29.423 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:29.423 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.423 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:29.423 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:29.423 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.423 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:29.423 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:29.423 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:29.423 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:29.423 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:29.423 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.681 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=85894 00:12:29.681 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:29.681 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 85894 00:12:29.681 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 85894 ']' 00:12:29.681 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.681 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:29.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.681 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.681 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:29.681 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.681 [2024-12-14 18:32:45.246010] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
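
[annotation] nvmfappstart prefixes NVMF_APP with the netns wrapper, so the target process and every socket it opens live inside nvmf_tgt_ns_spdk; waitforlisten then blocks until the app answers on its UNIX-domain RPC socket. A minimal sketch of that launch-and-wait pattern (binary path and flags from the trace; the polling loop approximates waitforlisten, and rpc_get_methods is simply a cheap RPC to probe with):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for _ in $(seq 1 100); do   # max_retries=100 in the trace
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done
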
00:12:29.681 [2024-12-14 18:32:45.246112] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.681 [2024-12-14 18:32:45.388868] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.939 [2024-12-14 18:32:45.483291] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.939 [2024-12-14 18:32:45.483383] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.939 [2024-12-14 18:32:45.483409] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.939 [2024-12-14 18:32:45.483420] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.939 [2024-12-14 18:32:45.483430] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.939 [2024-12-14 18:32:45.483632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.939 [2024-12-14 18:32:45.483720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.939 [2024-12-14 18:32:45.484469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.939 [2024-12-14 18:32:45.484509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.506 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:30.506 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:30.507 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:30.507 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:30.507 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.766 [2024-12-14 18:32:46.287730] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.766 [2024-12-14 18:32:46.299920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
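
[annotation] With the app up, the test creates the TCP transport and exposes the discovery service on the in-namespace address. The rpc_cmd calls traced above map directly onto repo-relative rpc.py invocations ("-t tcp -o" comes from NVMF_TRANSPORT_OPTS as assembled earlier in the log; -u 8192 sets the IO unit size):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery
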
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
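
[annotation] The referral phase registers three discovery referrals and then checks them from both sides: once through the RPC view, once through an actual host-side discovery. The RPC half, condensed from the trace:

  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq length                        # expect 3
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
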
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:30.766 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:31.025 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:31.284 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:31.285 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:31.285 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:31.285 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:31.285 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:31.285 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:31.285 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:31.285 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:31.285 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:31.285 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:31.285 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:31.285 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:31.285 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:31.554 18:32:47 
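
[annotation] The host-side half drives nvme discover against the discovery service and classifies records by subtype: the target's own discovery subsystem reports as "current discovery subsystem", a referral registered with -n nqn.2016-06.io.spdk:cnode1 shows up as an "nvme subsystem" record, and one registered with -n discovery as a "discovery subsystem referral". Condensed from the jq filters in the trace:

  out=$(nvme discover --hostnqn="$NVME_HOSTNQN" -t tcp -a 10.0.0.3 -s 8009 -o json)
  # referral traddrs, excluding the discovery subsystem we are talking to
  jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' <<<"$out" | sort
  # subnqn of a referral that points at a regular NVMe subsystem
  jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn' <<<"$out"
  # subnqn of a referral that points at another discovery service
  jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn' <<<"$out"
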
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:31.554 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:31.826 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:31.826 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:31.826 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:31.826 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:31.826 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:31.826 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:31.826 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:31.826 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:31.826 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:31.826 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:31.826 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:31.826 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:31.826 18:32:47 
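
[annotation] Removal is symmetric and keyed on the full tuple including the subsystem NQN: the trace added two referrals to the same 127.0.0.2:4430 with different NQNs, removed only the cnode1 one, and the checks above confirm a single entry (the -n discovery referral) survives. Condensed:

  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 \
      -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'   # only the -n discovery entry remains
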
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:32.085 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:32.085 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:32.085 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.085 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.085 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.085 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:32.085 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:32.085 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.085 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.085 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.085 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:32.085 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:32.085 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:32.085 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:32.085 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:32.085 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:32.086 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:32.344 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:32.344 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:32.344 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:32.344 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:32.344 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:32.345 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:32.345 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:32.345 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:32.345 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:32.345 
18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:32.345 rmmod nvme_tcp 00:12:32.345 rmmod nvme_fabrics 00:12:32.345 rmmod nvme_keyring 00:12:32.345 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:32.345 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:32.345 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:32.345 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 85894 ']' 00:12:32.345 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 85894 00:12:32.345 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 85894 ']' 00:12:32.345 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 85894 00:12:32.345 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:32.345 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:32.345 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85894 00:12:32.345 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:32.345 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:32.345 killing process with pid 85894 00:12:32.345 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85894' 00:12:32.345 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 85894 00:12:32.345 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 85894 00:12:32.604 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:32.604 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:32.604 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:32.604 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:32.604 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:32.604 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-save 00:12:32.604 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-restore 00:12:32.604 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:32.604 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:32.604 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:32.604 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:32.604 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:32.604 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:32.604 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
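
[annotation] nvmftestfini unwinds everything in reverse: unload the nvme-tcp stack (with a retry bracket, since the modules can stay busy while connections drain), reap the target, strip the tagged iptables rules, and tear the veth/bridge plumbing down. The non-obvious pieces, condensed from the trace (the sleep is illustrative; the trace just retries):

  set +e
  for _ in {1..20}; do
      modprobe -v -r nvme-tcp && break    # may fail while references drain
      sleep 0.1
  done
  set -e
  kill -0 "$nvmfpid"                      # killprocess: confirm the pid is still alive
  kill "$nvmfpid" && wait "$nvmfpid"      # SIGTERM, then reap
  # iptr: rules were inserted with an SPDK_NVMF comment tag, so cleanup is a filter
  iptables-save | grep -v SPDK_NVMF | iptables-restore
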
-- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:32.604 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:32.604 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:32.604 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:32.604 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:32.863 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:32.863 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:32.863 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:32.863 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:32.863 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:32.863 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.863 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.863 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.863 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:12:32.863 00:12:32.863 real 0m3.970s 00:12:32.863 user 0m12.069s 00:12:32.863 sys 0m1.086s 00:12:32.863 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:32.863 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.863 ************************************ 00:12:32.863 END TEST nvmf_referrals 00:12:32.863 ************************************ 00:12:32.863 18:32:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:32.864 18:32:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:32.864 18:32:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:32.864 18:32:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:32.864 ************************************ 00:12:32.864 START TEST nvmf_connect_disconnect 00:12:32.864 ************************************ 00:12:32.864 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:33.123 * Looking for test storage... 
00:12:33.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:33.123 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:33.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.124 --rc genhtml_branch_coverage=1 00:12:33.124 --rc genhtml_function_coverage=1 00:12:33.124 --rc genhtml_legend=1 00:12:33.124 --rc geninfo_all_blocks=1 00:12:33.124 --rc geninfo_unexecuted_blocks=1 00:12:33.124 00:12:33.124 ' 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:33.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.124 --rc genhtml_branch_coverage=1 00:12:33.124 --rc genhtml_function_coverage=1 00:12:33.124 --rc genhtml_legend=1 00:12:33.124 --rc geninfo_all_blocks=1 00:12:33.124 --rc geninfo_unexecuted_blocks=1 00:12:33.124 00:12:33.124 ' 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:33.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.124 --rc genhtml_branch_coverage=1 00:12:33.124 --rc genhtml_function_coverage=1 00:12:33.124 --rc genhtml_legend=1 00:12:33.124 --rc geninfo_all_blocks=1 00:12:33.124 --rc geninfo_unexecuted_blocks=1 00:12:33.124 00:12:33.124 ' 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:33.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.124 --rc genhtml_branch_coverage=1 00:12:33.124 --rc genhtml_function_coverage=1 00:12:33.124 --rc genhtml_legend=1 00:12:33.124 --rc geninfo_all_blocks=1 00:12:33.124 --rc geninfo_unexecuted_blocks=1 00:12:33.124 00:12:33.124 ' 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
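
[annotation] The block above is scripts/common.sh deciding which lcov flags to use: cmp_versions splits both version strings on '.', '-' and ':', validates each component as a decimal, and compares component-wise, so 1.15 < 2 selects the legacy --rc lcov_* option spelling. A condensed reconstruction (assumes numeric components, as the decimal guard in the trace enforces):

  lt() {                                  # lt 1.15 2  ->  returns 0 (true)
      local IFS=.-: i v1 v2
      read -ra v1 <<<"$1"; read -ra v2 <<<"$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      done
      return 1                            # equal is not less-than
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && \
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
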
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.124 18:32:48 
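
[annotation] The PATH dump above is an artifact of every test file re-sourcing paths/export.sh, which prepends the same golangci/protoc/go directories each time; lookup stops at the first match, so the duplicates are harmless, just noisy. If the growth ever mattered, a dedup pass is cheap (illustrative only, not something export.sh does):

  dedupe_path() {
      local out='' p IFS=:
      for p in $PATH; do
          [[ ":$out:" == *":$p:"* ]] || out=${out:+$out:}$p
      done
      printf '%s' "$out"
  }
  PATH=$(dedupe_path)
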
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:33.124 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.124 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:33.125 Cannot find device "nvmf_init_br" 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:33.125 Cannot find device "nvmf_init_br2" 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:33.125 Cannot find device "nvmf_tgt_br" 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:33.125 Cannot find device "nvmf_tgt_br2" 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:33.125 Cannot find device "nvmf_init_br" 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:33.125 Cannot find device "nvmf_init_br2" 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:33.125 Cannot find device "nvmf_tgt_br" 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:33.125 Cannot find device "nvmf_tgt_br2" 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
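
[annotation] One real wart surfaced a few records up in this test's init: the "[: : integer expression expected" line from nvmf/common.sh line 33 is bash complaining that `'[' '' -eq 1 ']'` compared an empty string numerically, because the flag being tested was unset. The run survives only because the failed check reads as "flag off". The standard hardening is to default the expansion (FLAG here is a stand-in for whatever variable line 33 actually tests):

  [ "$FLAG" -eq 1 ]        # fragile: exit 2 plus stderr noise when FLAG is empty
  [ "${FLAG:-0}" -eq 1 ]   # defensive: the numeric test always sees an integer
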
00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:33.125 Cannot find device "nvmf_br" 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:12:33.125 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:33.384 Cannot find device "nvmf_init_if" 00:12:33.384 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:12:33.384 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:33.384 Cannot find device "nvmf_init_if2" 00:12:33.384 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:12:33.384 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:33.384 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:33.384 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:12:33.384 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:33.384 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:33.384 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:12:33.384 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:33.384 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:33.384 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:33.384 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:33.384 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:33.384 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:33.384 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:33.384 18:32:49 
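From "ip netns add" onward the harness rebuilds its virtual test network: one namespace for the target, four veth pairs whose *_if ends carry 10.0.0.1-10.0.0.4/24 (the two initiator ends stay in the root namespace, the two target ends are moved into nvmf_tgt_ns_spdk), and *_br peer ends that get enslaved to a bridge just below. A minimal one-pair sketch of the same idea, using hypothetical demo_* names and skipping the bridge:

    # One veth pair: initiator end stays in the root namespace,
    # target end becomes the NIC inside the namespace.
    ip netns add demo_ns
    ip link add demo_init type veth peer name demo_tgt
    ip link set demo_tgt netns demo_ns
    ip addr add 10.0.0.1/24 dev demo_init
    ip netns exec demo_ns ip addr add 10.0.0.3/24 dev demo_tgt
    ip link set demo_init up
    ip netns exec demo_ns ip link set demo_tgt up
    ip netns exec demo_ns ip link set lo up
    ping -c 1 10.0.0.3    # same connectivity check the harness runs below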
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:33.384 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:33.643 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:33.643 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:33.643 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:33.643 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:33.643 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:33.643 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:33.643 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:12:33.643 00:12:33.643 --- 10.0.0.3 ping statistics --- 00:12:33.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.643 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:12:33.643 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:33.643 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:33.643 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:12:33.643 00:12:33.643 --- 10.0.0.4 ping statistics --- 00:12:33.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.643 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:12:33.643 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:33.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:33.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:12:33.643 00:12:33.643 --- 10.0.0.1 ping statistics --- 00:12:33.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.643 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:12:33.643 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:33.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:12:33.643 00:12:33.643 --- 10.0.0.2 ping statistics --- 00:12:33.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.643 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:12:33.643 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # return 0 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # nvmfpid=86257 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 86257 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 86257 ']' 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:33.644 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.644 [2024-12-14 18:32:49.272283] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:33.644 [2024-12-14 18:32:49.272389] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.902 [2024-12-14 18:32:49.409964] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.902 [2024-12-14 18:32:49.488512] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.902 [2024-12-14 18:32:49.488575] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.902 [2024-12-14 18:32:49.488586] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.902 [2024-12-14 18:32:49.488593] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.902 [2024-12-14 18:32:49.488648] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
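Two details of this start-up are worth noting: NVMF_APP was prefixed with the NVMF_TARGET_NS_CMD array, so nvmf_tgt (and therefore its TCP listeners) runs entirely inside nvmf_tgt_ns_spdk, and waitforlisten blocks until the UNIX-domain RPC socket /var/tmp/spdk.sock appears before any rpc_cmd is issued. A rough sketch of that start-and-wait pattern (the 5-second polling budget is an illustrative choice, not the harness's):

    # Start the target inside the namespace, then wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0xF &
    tgt_pid=$!
    for _ in $(seq 50); do
        [ -S /var/tmp/spdk.sock ] && break        # RPC socket is up
        kill -0 "$tgt_pid" 2>/dev/null || { echo "target died" >&2; exit 1; }
        sleep 0.1
    done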
00:12:33.902 [2024-12-14 18:32:49.488746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.902 [2024-12-14 18:32:49.488809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.902 [2024-12-14 18:32:49.489708] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.902 [2024-12-14 18:32:49.489718] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.902 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:33.902 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:33.902 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:33.902 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:33.902 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.161 [2024-12-14 18:32:49.697851] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.161 18:32:49 
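The rpc_cmd calls above are the complete provisioning pipeline for this test: create the TCP transport, back it with a 64 MB malloc bdev using 512-byte blocks, create subsystem cnode1, and attach the bdev as its namespace (the TCP listener on 10.0.0.3:4420 is added immediately below). The same sequence expressed directly against SPDK's stock scripts/rpc.py, with the flags taken verbatim from this log (the rpc variable and repo path are just how this run is laid out):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                    # returns "Malloc0"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420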
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.161 [2024-12-14 18:32:49.764355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:34.161 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:36.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.303 
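With num_iterations=100 and NVME_CONNECT='nvme connect -i 8', every "disconnected 1 controller(s)" line that follows is one round trip of the test's core loop: connect to cnode1 over TCP with 8 I/O queue pairs, then disconnect by NQN (nvme-cli prints the disconnect summary each time). A sketch of the loop these messages imply, not the script verbatim:

    nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 100); do
        nvme connect -t tcp -n "$nqn" -a 10.0.0.3 -s 4420 -i 8
        # a real test would verify the controller is up before disconnecting
        nvme disconnect -n "$nqn"   # prints "NQN:<nqn> disconnected 1 controller(s)"
    done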
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [iterations from 00:13:45.833 through 00:15:28.580 elided: each repeats the same "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" message, differing only in timestamp] 00:15:31.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1
controller(s) 00:15:33.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:18.130 rmmod nvme_tcp 00:16:18.130 rmmod nvme_fabrics 00:16:18.130 rmmod nvme_keyring 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 86257 ']' 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 86257 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 86257 ']' 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 86257 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:16:18.130 
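This teardown mirrors the trap installed when the target came up: on SIGINT/SIGTERM/EXIT the harness would dump shared memory and run nvmftestfini, while on the success path seen here the trap is cleared first (trap - SIGINT SIGTERM EXIT) and the same cleanup then runs explicitly, unloading nvme-tcp/nvme-fabrics/nvme-keyring and killing the recorded nvmf_tgt pid. The underlying bash idiom, reduced to its core (cleanup and tgt_pid are illustrative names):

    cleanup() {
        # Kill the target if it is still running; tolerate a stale pid.
        [ -n "$tgt_pid" ] && kill "$tgt_pid" 2>/dev/null || true
    }
    trap cleanup SIGINT SIGTERM EXIT
    # ... test body ...
    trap - SIGINT SIGTERM EXIT   # success: disarm the handler
    cleanup                      # then tear down deterministically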
18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86257 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86257' 00:16:18.130 killing process with pid 86257 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 86257 00:16:18.130 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 86257 00:16:18.389 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:18.389 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:18.389 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:18.389 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:18.389 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:16:18.389 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:18.389 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:16:18.389 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:18.389 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:18.389 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:18.389 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:18.389 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:18.389 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:18.389 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:18.389 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:18.389 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:18.389 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:18.389 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:18.389 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:18.389 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:18.389 18:36:34 
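The iptr call above is the counterpart of the ipts wrapper used during setup: every firewall rule was inserted with -m comment --comment 'SPDK_NVMF:...', so cleanup is "save the ruleset, drop every tagged line, restore" instead of deleting rules one by one. The two helpers, reconstructed from how this log shows them expanding:

    # Add a rule tagged so it can be found again later.
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    # Drop every tagged rule in a single pass.
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptr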
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:18.389 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:18.389 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:18.389 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.648 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:18.648 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.648 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:16:18.648 00:16:18.648 real 3m45.624s 00:16:18.648 user 14m36.943s 00:16:18.648 sys 0m23.315s 00:16:18.648 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:18.648 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:18.648 ************************************ 00:16:18.648 END TEST nvmf_connect_disconnect 00:16:18.648 ************************************ 00:16:18.648 18:36:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:18.648 18:36:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:18.648 18:36:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:18.648 18:36:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:18.648 ************************************ 00:16:18.648 START TEST nvmf_multitarget 00:16:18.648 ************************************ 00:16:18.648 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:18.648 * Looking for test storage... 
00:16:18.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:18.648 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:18.648 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:18.648 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:18.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.907 --rc genhtml_branch_coverage=1 00:16:18.907 --rc genhtml_function_coverage=1 00:16:18.907 --rc genhtml_legend=1 00:16:18.907 --rc geninfo_all_blocks=1 00:16:18.907 --rc geninfo_unexecuted_blocks=1 00:16:18.907 00:16:18.907 ' 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:18.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.907 --rc genhtml_branch_coverage=1 00:16:18.907 --rc genhtml_function_coverage=1 00:16:18.907 --rc genhtml_legend=1 00:16:18.907 --rc geninfo_all_blocks=1 00:16:18.907 --rc geninfo_unexecuted_blocks=1 00:16:18.907 00:16:18.907 ' 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:18.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.907 --rc genhtml_branch_coverage=1 00:16:18.907 --rc genhtml_function_coverage=1 00:16:18.907 --rc genhtml_legend=1 00:16:18.907 --rc geninfo_all_blocks=1 00:16:18.907 --rc geninfo_unexecuted_blocks=1 00:16:18.907 00:16:18.907 ' 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:18.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.907 --rc genhtml_branch_coverage=1 00:16:18.907 --rc genhtml_function_coverage=1 00:16:18.907 --rc genhtml_legend=1 00:16:18.907 --rc geninfo_all_blocks=1 00:16:18.907 --rc geninfo_unexecuted_blocks=1 00:16:18.907 00:16:18.907 ' 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:18.907 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:18.908 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:18.908 Cannot find device "nvmf_init_br" 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:18.908 Cannot find device "nvmf_init_br2" 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:18.908 Cannot find device "nvmf_tgt_br" 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:18.908 Cannot find device "nvmf_tgt_br2" 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:18.908 Cannot find device "nvmf_init_br" 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:18.908 Cannot find device "nvmf_init_br2" 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:18.908 Cannot find device "nvmf_tgt_br" 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:18.908 Cannot find device "nvmf_tgt_br2" 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:18.908 Cannot find device "nvmf_br" 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:18.908 Cannot find device "nvmf_init_if" 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:18.908 Cannot find device "nvmf_init_if2" 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:18.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:18.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:18.908 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:19.166 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:19.166 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:16:19.166 00:16:19.166 --- 10.0.0.3 ping statistics --- 00:16:19.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.166 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:19.166 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:19.166 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:16:19.166 00:16:19.166 --- 10.0.0.4 ping statistics --- 00:16:19.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.166 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:19.166 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:19.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:19.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:16:19.166 00:16:19.166 --- 10.0.0.1 ping statistics --- 00:16:19.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.166 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:19.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:19.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:16:19.167 00:16:19.167 --- 10.0.0.2 ping statistics --- 00:16:19.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.167 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # return 0 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=90036 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 90036 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 90036 ']' 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:19.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:19.167 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:19.425 [2024-12-14 18:36:34.945485] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
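[Editor's note] The interface and bridge names in the setup above come straight from nvmf/common.sh's nvmf_veth_init. As a rough illustration only, the topology it builds can be reproduced with a few ip(8) commands; this is a minimal sketch assuming root privileges and a clean host, with the second initiator/target pair (the *_if2/*_br2 interfaces) dropped for brevity — it is not the harness's own code, which also handles re-runs and cleanup:

#!/usr/bin/env bash
# Minimal sketch of the veth/bridge topology the log above sets up:
# one initiator-side veth pair in the default namespace, one
# target-side pair with its far end moved into a private namespace,
# and both bridge-side peers enslaved to a single bridge so that
# 10.0.0.1/24 and 10.0.0.3/24 share one broadcast domain.
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3   # initiator -> target, as verified in the log

With this in place the target process is launched inside the namespace (the "ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt" line above), so its TCP listener on 10.0.0.3:4420 is reachable from the host side only through the bridge.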
00:16:19.425 [2024-12-14 18:36:34.945584] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.425 [2024-12-14 18:36:35.090151] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:19.683 [2024-12-14 18:36:35.188345] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.683 [2024-12-14 18:36:35.188416] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.683 [2024-12-14 18:36:35.188431] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.683 [2024-12-14 18:36:35.188442] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.683 [2024-12-14 18:36:35.188451] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:19.683 [2024-12-14 18:36:35.189437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.683 [2024-12-14 18:36:35.189640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.683 [2024-12-14 18:36:35.189719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:19.683 [2024-12-14 18:36:35.189726] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.247 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:20.247 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:20.247 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:20.247 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:20.247 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:20.503 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.503 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:20.503 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:20.503 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:20.503 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:20.503 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:20.762 "nvmf_tgt_1" 00:16:20.762 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:20.762 "nvmf_tgt_2" 00:16:20.762 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:20.762 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:21.020 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:21.020 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:21.020 true 00:16:21.020 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:21.278 true 00:16:21.278 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:21.278 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:21.278 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:21.278 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:21.278 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:21.278 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:21.278 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:21.278 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:21.278 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:21.278 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:21.537 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:21.537 rmmod nvme_tcp 00:16:21.537 rmmod nvme_fabrics 00:16:21.537 rmmod nvme_keyring 00:16:21.537 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:21.537 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:21.537 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:21.537 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 90036 ']' 00:16:21.537 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 90036 00:16:21.537 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 90036 ']' 00:16:21.537 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 90036 00:16:21.537 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:21.537 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:21.537 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90036 00:16:21.537 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:21.537 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:21.537 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
90036' 00:16:21.537 killing process with pid 90036 00:16:21.537 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 90036 00:16:21.537 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 90036 00:16:21.795 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:21.795 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:21.795 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:21.795 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:21.795 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-save 00:16:21.795 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:21.795 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-restore 00:16:21.795 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:21.795 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:21.795 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:21.795 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:21.795 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:21.795 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:21.795 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:21.795 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:21.795 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:21.795 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:21.795 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:21.795 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:22.054 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:22.054 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:22.054 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.054 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:22.054 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.054 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.054 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.054 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@300 -- # return 0 00:16:22.054 00:16:22.054 real 0m3.424s 00:16:22.054 user 0m10.058s 00:16:22.054 sys 0m0.908s 00:16:22.054 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:22.054 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:22.054 ************************************ 00:16:22.054 END TEST nvmf_multitarget 00:16:22.054 ************************************ 00:16:22.054 18:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:22.054 18:36:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:22.054 18:36:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:22.054 18:36:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:22.054 ************************************ 00:16:22.054 START TEST nvmf_rpc 00:16:22.054 ************************************ 00:16:22.054 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:22.054 * Looking for test storage... 00:16:22.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:22.054 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:22.054 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:22.054 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:22.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.313 --rc genhtml_branch_coverage=1 00:16:22.313 --rc genhtml_function_coverage=1 00:16:22.313 --rc genhtml_legend=1 00:16:22.313 --rc geninfo_all_blocks=1 00:16:22.313 --rc geninfo_unexecuted_blocks=1 00:16:22.313 00:16:22.313 ' 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:22.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.313 --rc genhtml_branch_coverage=1 00:16:22.313 --rc genhtml_function_coverage=1 00:16:22.313 --rc genhtml_legend=1 00:16:22.313 --rc geninfo_all_blocks=1 00:16:22.313 --rc geninfo_unexecuted_blocks=1 00:16:22.313 00:16:22.313 ' 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:22.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.313 --rc genhtml_branch_coverage=1 00:16:22.313 --rc genhtml_function_coverage=1 00:16:22.313 --rc genhtml_legend=1 00:16:22.313 --rc geninfo_all_blocks=1 00:16:22.313 --rc geninfo_unexecuted_blocks=1 00:16:22.313 00:16:22.313 ' 00:16:22.313 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:22.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.313 --rc genhtml_branch_coverage=1 00:16:22.313 --rc genhtml_function_coverage=1 00:16:22.313 --rc genhtml_legend=1 00:16:22.314 --rc geninfo_all_blocks=1 00:16:22.314 --rc geninfo_unexecuted_blocks=1 00:16:22.314 00:16:22.314 ' 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.314 18:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:22.314 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:22.314 Cannot find device "nvmf_init_br" 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:16:22.314 18:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:22.314 Cannot find device "nvmf_init_br2" 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:22.314 Cannot find device "nvmf_tgt_br" 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.314 Cannot find device "nvmf_tgt_br2" 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:16:22.314 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:22.314 Cannot find device "nvmf_init_br" 00:16:22.314 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:16:22.314 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:22.314 Cannot find device "nvmf_init_br2" 00:16:22.314 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:16:22.314 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:22.314 Cannot find device "nvmf_tgt_br" 00:16:22.314 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:16:22.314 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:22.314 Cannot find device "nvmf_tgt_br2" 00:16:22.314 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:16:22.314 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:22.314 Cannot find device "nvmf_br" 00:16:22.314 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:16:22.314 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:22.573 Cannot find device "nvmf_init_if" 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:22.573 Cannot find device "nvmf_init_if2" 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:22.573 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.573 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:22.573 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:22.832 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:22.832 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:16:22.832 00:16:22.832 --- 10.0.0.3 ping statistics --- 00:16:22.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.832 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:22.832 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:22.832 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:16:22.832 00:16:22.832 --- 10.0.0.4 ping statistics --- 00:16:22.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.832 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:22.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:22.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:22.832 00:16:22.832 --- 10.0.0.1 ping statistics --- 00:16:22.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.832 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:22.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:22.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:16:22.832 00:16:22.832 --- 10.0.0.2 ping statistics --- 00:16:22.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.832 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # return 0 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # nvmfpid=90329 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 90329 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 90329 ']' 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:22.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:22.832 18:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.832 [2024-12-14 18:36:38.454304] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
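[Editor's note] The iptables calls above follow a tagging convention: the harness's ipts wrapper appends "-m comment --comment SPDK_NVMF:<rule>" to every rule it installs, and the iptr step seen earlier during nvmftestfini removes exactly those rules by filtering the tag out of a full ruleset dump. The wrapper bodies below are reconstructed from the expanded commands visible in the log (nvmf/common.sh@786 and @787), not copied from common.sh, so treat them as a sketch of the idea:

#!/usr/bin/env bash
# Rule-tagging pattern from the log: install firewall rules carrying
# an identifying comment, then remove only those rules later by
# round-tripping the ruleset through iptables-save/iptables-restore.
ipts() {
    # Tag each rule with its own argument string so it can be found later.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
iptr() {
    # Reload the ruleset minus every line carrying the SPDK_NVMF tag.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# ... run the tests ...
iptr   # teardown touches only the rules this run added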
00:16:22.832 [2024-12-14 18:36:38.454407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.091 [2024-12-14 18:36:38.591223] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:23.091 [2024-12-14 18:36:38.686235] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.091 [2024-12-14 18:36:38.686304] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.091 [2024-12-14 18:36:38.686315] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.091 [2024-12-14 18:36:38.686323] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.091 [2024-12-14 18:36:38.686329] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.091 [2024-12-14 18:36:38.686500] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.091 [2024-12-14 18:36:38.686710] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.091 [2024-12-14 18:36:38.687548] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.091 [2024-12-14 18:36:38.687583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.026 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:24.026 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:24.026 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:24.026 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:24.026 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.026 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.026 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:24.026 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.026 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.026 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.026 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:24.026 "poll_groups": [ 00:16:24.026 { 00:16:24.026 "admin_qpairs": 0, 00:16:24.026 "completed_nvme_io": 0, 00:16:24.026 "current_admin_qpairs": 0, 00:16:24.026 "current_io_qpairs": 0, 00:16:24.026 "io_qpairs": 0, 00:16:24.026 "name": "nvmf_tgt_poll_group_000", 00:16:24.026 "pending_bdev_io": 0, 00:16:24.026 "transports": [] 00:16:24.026 }, 00:16:24.026 { 00:16:24.026 "admin_qpairs": 0, 00:16:24.026 "completed_nvme_io": 0, 00:16:24.026 "current_admin_qpairs": 0, 00:16:24.026 "current_io_qpairs": 0, 00:16:24.026 "io_qpairs": 0, 00:16:24.026 "name": "nvmf_tgt_poll_group_001", 00:16:24.026 "pending_bdev_io": 0, 00:16:24.026 "transports": [] 00:16:24.026 }, 00:16:24.026 { 00:16:24.026 "admin_qpairs": 0, 00:16:24.026 "completed_nvme_io": 0, 00:16:24.026 "current_admin_qpairs": 0, 00:16:24.026 "current_io_qpairs": 0, 
00:16:24.026 "io_qpairs": 0, 00:16:24.026 "name": "nvmf_tgt_poll_group_002", 00:16:24.026 "pending_bdev_io": 0, 00:16:24.026 "transports": [] 00:16:24.026 }, 00:16:24.026 { 00:16:24.026 "admin_qpairs": 0, 00:16:24.026 "completed_nvme_io": 0, 00:16:24.026 "current_admin_qpairs": 0, 00:16:24.026 "current_io_qpairs": 0, 00:16:24.026 "io_qpairs": 0, 00:16:24.026 "name": "nvmf_tgt_poll_group_003", 00:16:24.026 "pending_bdev_io": 0, 00:16:24.026 "transports": [] 00:16:24.026 } 00:16:24.026 ], 00:16:24.026 "tick_rate": 2200000000 00:16:24.026 }' 00:16:24.026 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:24.026 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:24.026 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.027 [2024-12-14 18:36:39.633670] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:24.027 "poll_groups": [ 00:16:24.027 { 00:16:24.027 "admin_qpairs": 0, 00:16:24.027 "completed_nvme_io": 0, 00:16:24.027 "current_admin_qpairs": 0, 00:16:24.027 "current_io_qpairs": 0, 00:16:24.027 "io_qpairs": 0, 00:16:24.027 "name": "nvmf_tgt_poll_group_000", 00:16:24.027 "pending_bdev_io": 0, 00:16:24.027 "transports": [ 00:16:24.027 { 00:16:24.027 "trtype": "TCP" 00:16:24.027 } 00:16:24.027 ] 00:16:24.027 }, 00:16:24.027 { 00:16:24.027 "admin_qpairs": 0, 00:16:24.027 "completed_nvme_io": 0, 00:16:24.027 "current_admin_qpairs": 0, 00:16:24.027 "current_io_qpairs": 0, 00:16:24.027 "io_qpairs": 0, 00:16:24.027 "name": "nvmf_tgt_poll_group_001", 00:16:24.027 "pending_bdev_io": 0, 00:16:24.027 "transports": [ 00:16:24.027 { 00:16:24.027 "trtype": "TCP" 00:16:24.027 } 00:16:24.027 ] 00:16:24.027 }, 00:16:24.027 { 00:16:24.027 "admin_qpairs": 0, 00:16:24.027 "completed_nvme_io": 0, 00:16:24.027 "current_admin_qpairs": 0, 00:16:24.027 "current_io_qpairs": 0, 00:16:24.027 "io_qpairs": 0, 00:16:24.027 "name": "nvmf_tgt_poll_group_002", 00:16:24.027 "pending_bdev_io": 0, 00:16:24.027 "transports": [ 00:16:24.027 { 00:16:24.027 "trtype": "TCP" 00:16:24.027 } 
00:16:24.027 ] 00:16:24.027 }, 00:16:24.027 { 00:16:24.027 "admin_qpairs": 0, 00:16:24.027 "completed_nvme_io": 0, 00:16:24.027 "current_admin_qpairs": 0, 00:16:24.027 "current_io_qpairs": 0, 00:16:24.027 "io_qpairs": 0, 00:16:24.027 "name": "nvmf_tgt_poll_group_003", 00:16:24.027 "pending_bdev_io": 0, 00:16:24.027 "transports": [ 00:16:24.027 { 00:16:24.027 "trtype": "TCP" 00:16:24.027 } 00:16:24.027 ] 00:16:24.027 } 00:16:24.027 ], 00:16:24.027 "tick_rate": 2200000000 00:16:24.027 }' 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:24.027 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.286 Malloc1 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:24.286 18:36:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.286 [2024-12-14 18:36:39.844378] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -a 10.0.0.3 -s 4420 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -a 10.0.0.3 -s 4420 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -a 10.0.0.3 -s 4420 00:16:24.286 [2024-12-14 18:36:39.872950] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6' 00:16:24.286 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:24.286 could not add new controller: failed to write to nvme-fabrics device 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 
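[Editor's note] The nvme connect failure just above is the point of this step: cnode1 was created and then had allow_any_host disabled (rpc.sh@54) with no host NQNs added, so the target rejects the connection at nvmf_qpair_access_allowed and nvme-fabrics reports an I/O error; the NOT wrapper inverts the exit status so the expected failure counts as a pass, and rpc.sh@61 then whitelists the host so the next connect succeeds. A condensed, hedged replay of that provisioning and probe, using the same RPCs the log shows (rpc.py standing in for the harness's rpc_cmd wrapper, and assuming nvmf_tgt is already running with the tcp transport created as at rpc.sh@31):

#!/usr/bin/env bash
# Condensed replay of the host allow-list test above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6

$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns "$NQN" Malloc1
$RPC nvmf_subsystem_allow_any_host -d "$NQN"   # enforce the allow list
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420

# With no hosts whitelisted, the connect must fail (EIO from nvme-fabrics).
if nvme connect --hostnqn="$HOSTNQN" --hostid="${HOSTNQN#*uuid:}" \
        -t tcp -n "$NQN" -a 10.0.0.3 -s 4420; then
    echo "unexpected success: subsystem should have rejected the host" >&2
    exit 1
fi

# After adding the host NQN to the subsystem, the same connect succeeds.
$RPC nvmf_subsystem_add_host "$NQN" "$HOSTNQN"
nvme connect --hostnqn="$HOSTNQN" --hostid="${HOSTNQN#*uuid:}" \
    -t tcp -n "$NQN" -a 10.0.0.3 -s 4420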
00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.286 18:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:24.544 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:24.544 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:24.544 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:24.544 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:24.544 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:26.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:26.446 [2024-12-14 18:36:42.184490] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6' 00:16:26.446 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:26.446 could not add new controller: failed to write to nvme-fabrics device 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.446 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:16:26.704 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.704 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:26.704 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:26.704 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:26.704 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:26.704 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:26.704 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:29.237 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:29.237 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:29.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.238 [2024-12-14 18:36:44.478168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:29.238 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:31.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.140 [2024-12-14 18:36:46.794493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.140 18:36:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.140 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:31.399 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:31.399 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:31.399 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:31.399 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:31.399 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:33.302 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:33.302 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:33.302 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:33.302 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:33.302 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:33.302 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:33.302 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:33.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.302 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:33.302 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:33.302 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:33.302 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.561 18:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.561 [2024-12-14 18:36:49.099164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:33.561 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1205 -- # sleep 2 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:36.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:36.094 18:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.094 [2024-12-14 18:36:51.416841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:36.094 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:37.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:37.999 18:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.999 [2024-12-14 18:36:53.733378] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.999 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.259 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:38.259 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:38.259 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:38.259 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:38.259 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:38.259 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:38.259 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:40.206 18:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:40.206 18:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:40.206 18:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:40.206 18:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:40.206 18:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:40.206 18:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:40.206 18:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:40.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.465 18:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:40.465 18:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
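Each of the five passes traced above (the target/rpc.sh@81 loop) follows the same create/attach/connect/tear-down shape before the second loop (target/rpc.sh@99) starts below. A condensed sketch of one pass, under the same assumptions as the sketch above; waitforserial is a re-creation of the lsblk polling the trace keeps expanding (the real helper lives in autotest_common.sh):

  # Poll up to ~30 s for a block device whose SERIAL matches.
  waitforserial() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
          sleep 2
      done
      return 1
  }
  for i in $(seq 1 5); do
      rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # expose the bdev as nsid 5
      rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN"
      waitforserial SPDKISFASTANDAWESOME
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done

The second loop that follows repeats the create/add-ns/delete churn without ever connecting a host, which is why the final stats document still reports zero current qpairs.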
00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.465 [2024-12-14 18:36:56.065800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.465 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.466 [2024-12-14 18:36:56.113882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:40.466 18:36:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.466 [2024-12-14 18:36:56.161913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.466 [2024-12-14 18:36:56.210015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.466 
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.466 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.725 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.726 [2024-12-14 18:36:56.258076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]]
00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:40.726 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:16:40.726 "poll_groups": [
00:16:40.726 {
00:16:40.726 "admin_qpairs": 2,
00:16:40.726 "completed_nvme_io": 67,
00:16:40.726 "current_admin_qpairs": 0,
00:16:40.726 "current_io_qpairs": 0,
00:16:40.726 "io_qpairs": 16,
00:16:40.726 "name": "nvmf_tgt_poll_group_000",
00:16:40.726 "pending_bdev_io": 0,
00:16:40.726 "transports": [
00:16:40.726 {
00:16:40.726 "trtype": "TCP"
00:16:40.726 }
00:16:40.726 ]
00:16:40.726 },
00:16:40.726 {
00:16:40.726 "admin_qpairs": 3,
00:16:40.726 "completed_nvme_io": 115,
00:16:40.726 "current_admin_qpairs": 0,
00:16:40.726 "current_io_qpairs": 0,
00:16:40.726 "io_qpairs": 17,
00:16:40.726 "name": "nvmf_tgt_poll_group_001",
00:16:40.726 "pending_bdev_io": 0,
00:16:40.726 "transports": [
00:16:40.726 {
00:16:40.726 "trtype": "TCP"
00:16:40.726 }
00:16:40.726 ]
00:16:40.726 },
00:16:40.726 {
00:16:40.726 "admin_qpairs": 1,
00:16:40.726 "completed_nvme_io": 169,
00:16:40.726 "current_admin_qpairs": 0,
00:16:40.726 "current_io_qpairs": 0,
00:16:40.726 "io_qpairs": 19,
00:16:40.726 "name": "nvmf_tgt_poll_group_002",
00:16:40.726 "pending_bdev_io": 0,
00:16:40.726 "transports": [
00:16:40.726 {
00:16:40.726 "trtype": "TCP"
00:16:40.726 }
00:16:40.726 ]
00:16:40.726 },
00:16:40.726 {
00:16:40.726 "admin_qpairs": 1,
00:16:40.726 "completed_nvme_io": 69,
00:16:40.726 "current_admin_qpairs": 0,
00:16:40.726 "current_io_qpairs": 0,
00:16:40.726 "io_qpairs": 18,
00:16:40.726 "name": "nvmf_tgt_poll_group_003",
00:16:40.726 "pending_bdev_io": 0,
00:16:40.726 "transports": [
00:16:40.726 {
00:16:40.726 "trtype": "TCP"
00:16:40.726 }
00:16:40.726 ]
00:16:40.726 }
00:16:40.726 ],
00:16:40.726 "tick_rate": 2200000000 }'
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 ))
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20}
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:40.985 rmmod nvme_tcp
00:16:40.985 rmmod nvme_fabrics
00:16:40.985 rmmod nvme_keyring
00:16:40.985 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:40.985 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e
00:16:40.985 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0
00:16:40.985 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 90329 ']'
00:16:40.985 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 90329
00:16:40.985 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 90329 ']'
00:16:40.985 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 90329
00:16:40.985 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname
00:16:40.985 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:40.985 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90329
00:16:40.985 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:40.985 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 90329
18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90329'
00:16:40.985 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 90329
00:16:41.244 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 90329
00:16:41.244 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:16:41.244 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:16:41.244 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:16:41.244 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr
00:16:41.244 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-save
00:16:41.244 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:16:41.244 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-restore
00:16:41.244 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:16:41.244 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:16:41.244 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:16:41.244 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:16:41.244 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:16:41.244 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:16:41.244 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:16:41.244 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:16:41.244 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:16:41.244 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:16:41.244 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:16:41.503 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:16:41.503 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:16:41.503 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:16:41.503 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:16:41.503 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns
00:16:41.503 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:41.503 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:41.503 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:41.503 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0
00:16:41.503
00:16:41.503 real 0m19.396s
00:16:41.503 user 1m12.089s
00:16:41.503 sys 0m2.209s
00:16:41.503 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc
-- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:41.503 ************************************ 00:16:41.503 END TEST nvmf_rpc 00:16:41.503 ************************************ 00:16:41.503 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.503 18:36:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:41.503 18:36:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:41.503 18:36:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:41.503 18:36:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:41.503 ************************************ 00:16:41.503 START TEST nvmf_invalid 00:16:41.503 ************************************ 00:16:41.503 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:41.503 * Looking for test storage... 00:16:41.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:41.503 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:41.503 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:16:41.503 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:41.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.763 --rc genhtml_branch_coverage=1 00:16:41.763 --rc genhtml_function_coverage=1 00:16:41.763 --rc genhtml_legend=1 00:16:41.763 --rc geninfo_all_blocks=1 00:16:41.763 --rc geninfo_unexecuted_blocks=1 00:16:41.763 00:16:41.763 ' 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:41.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.763 --rc genhtml_branch_coverage=1 00:16:41.763 --rc genhtml_function_coverage=1 00:16:41.763 --rc genhtml_legend=1 00:16:41.763 --rc geninfo_all_blocks=1 00:16:41.763 --rc geninfo_unexecuted_blocks=1 00:16:41.763 00:16:41.763 ' 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:41.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.763 --rc genhtml_branch_coverage=1 00:16:41.763 --rc genhtml_function_coverage=1 00:16:41.763 --rc genhtml_legend=1 00:16:41.763 --rc geninfo_all_blocks=1 00:16:41.763 --rc geninfo_unexecuted_blocks=1 00:16:41.763 00:16:41.763 ' 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:41.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.763 --rc genhtml_branch_coverage=1 00:16:41.763 --rc genhtml_function_coverage=1 00:16:41.763 --rc genhtml_legend=1 00:16:41.763 --rc geninfo_all_blocks=1 00:16:41.763 --rc geninfo_unexecuted_blocks=1 00:16:41.763 00:16:41.763 ' 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:41.763 18:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.763 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:41.764 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
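The nvmf_veth_init sequence the trace walks through next builds a self-contained test network: a target namespace (nvmf_tgt_ns_spdk), veth pairs whose host-side ends are enslaved to a bridge (nvmf_br), 10.0.0.0/24 addresses, iptables ACCEPT rules for port 4420, and ping checks in both directions. A minimal sketch of the same pattern, reduced to one initiator/target pair, with interface names and addresses taken from the trace (the second pair and error handling omitted):

    # Create the namespace and one veth pair per side; the *_br ends stay on the host.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # Bridge the host-side ends together so initiator and target can reach each other.
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Let NVMe/TCP traffic in, then verify connectivity the way the harness does.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3
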
00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:41.764 Cannot find device "nvmf_init_br" 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:41.764 Cannot find device "nvmf_init_br2" 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:41.764 Cannot find device "nvmf_tgt_br" 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:41.764 Cannot find device "nvmf_tgt_br2" 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:41.764 Cannot find device "nvmf_init_br" 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:41.764 Cannot find device "nvmf_init_br2" 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:41.764 Cannot find device "nvmf_tgt_br" 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:41.764 Cannot find device "nvmf_tgt_br2" 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:41.764 Cannot find device "nvmf_br" 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:41.764 Cannot find device "nvmf_init_if" 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:41.764 Cannot find device "nvmf_init_if2" 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:16:41.764 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:41.764 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:16:41.765 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:16:41.765 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:41.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:41.765 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:16:41.765 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:41.765 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:41.765 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:41.765 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:42.023 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:42.023 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:42.023 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:42.023 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:42.024 18:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:42.024 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:42.024 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:16:42.024 00:16:42.024 --- 10.0.0.3 ping statistics --- 00:16:42.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.024 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:42.024 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:42.024 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:16:42.024 00:16:42.024 --- 10.0.0.4 ping statistics --- 00:16:42.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.024 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:42.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:42.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:42.024 00:16:42.024 --- 10.0.0.1 ping statistics --- 00:16:42.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.024 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:42.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:42.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:16:42.024 00:16:42.024 --- 10.0.0.2 ping statistics --- 00:16:42.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.024 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # return 0 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=90891 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 90891 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 90891 ']' 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:42.024 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:42.283 [2024-12-14 18:36:57.834246] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
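Every case from here on drives rpc.py against a freshly started target: nvmfappstart launched nvmf_tgt inside the namespace with -i 0 -e 0xFFFF -m 0xF (pid 90891) and waited for its RPC socket. A hedged sketch of that start-and-wait step, assuming the repo paths shown in the trace and using rpc_get_methods purely as a liveness probe (the real waitforlisten helper has its own retry and timeout logic):

    # Start the target in the test namespace and record its pid.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Poll the UNIX-domain RPC socket until the app answers.
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
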
00:16:42.283 [2024-12-14 18:36:57.834335] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.283 [2024-12-14 18:36:57.970371] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:42.541 [2024-12-14 18:36:58.049724] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.541 [2024-12-14 18:36:58.049796] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:42.541 [2024-12-14 18:36:58.049807] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:42.541 [2024-12-14 18:36:58.049815] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:42.541 [2024-12-14 18:36:58.049821] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:42.541 [2024-12-14 18:36:58.050009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.541 [2024-12-14 18:36:58.050139] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.541 [2024-12-14 18:36:58.050809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:42.541 [2024-12-14 18:36:58.050835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.108 18:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:43.108 18:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:16:43.108 18:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:43.108 18:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:43.108 18:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:43.108 18:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.108 18:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:43.109 18:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6619 00:16:43.676 [2024-12-14 18:36:59.119957] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:43.676 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/12/14 18:36:59 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode6619 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:16:43.676 request: 00:16:43.676 { 00:16:43.676 "method": "nvmf_create_subsystem", 00:16:43.676 "params": { 00:16:43.676 "nqn": "nqn.2016-06.io.spdk:cnode6619", 00:16:43.676 "tgt_name": "foobar" 00:16:43.676 } 00:16:43.676 } 00:16:43.676 Got JSON-RPC error response 00:16:43.676 GoRPCClient: error on JSON-RPC call' 00:16:43.676 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/12/14 18:36:59 error on JSON-RPC call, method: nvmf_create_subsystem, params: 
map[nqn:nqn.2016-06.io.spdk:cnode6619 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:16:43.676 request: 00:16:43.676 { 00:16:43.676 "method": "nvmf_create_subsystem", 00:16:43.676 "params": { 00:16:43.676 "nqn": "nqn.2016-06.io.spdk:cnode6619", 00:16:43.676 "tgt_name": "foobar" 00:16:43.676 } 00:16:43.676 } 00:16:43.676 Got JSON-RPC error response 00:16:43.676 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:43.676 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:43.676 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13965 00:16:43.676 [2024-12-14 18:36:59.360271] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13965: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:43.676 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/12/14 18:36:59 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13965 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:16:43.676 request: 00:16:43.676 { 00:16:43.676 "method": "nvmf_create_subsystem", 00:16:43.676 "params": { 00:16:43.676 "nqn": "nqn.2016-06.io.spdk:cnode13965", 00:16:43.676 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:16:43.676 } 00:16:43.676 } 00:16:43.676 Got JSON-RPC error response 00:16:43.676 GoRPCClient: error on JSON-RPC call' 00:16:43.676 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/12/14 18:36:59 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13965 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:16:43.676 request: 00:16:43.676 { 00:16:43.676 "method": "nvmf_create_subsystem", 00:16:43.676 "params": { 00:16:43.676 "nqn": "nqn.2016-06.io.spdk:cnode13965", 00:16:43.676 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:16:43.676 } 00:16:43.676 } 00:16:43.676 Got JSON-RPC error response 00:16:43.676 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:43.676 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:43.676 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27580 00:16:43.935 [2024-12-14 18:36:59.616581] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27580: invalid model number 'SPDK_Controller' 00:16:43.935 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/12/14 18:36:59 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode27580], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:16:43.935 request: 00:16:43.935 { 00:16:43.935 "method": "nvmf_create_subsystem", 00:16:43.935 "params": { 00:16:43.935 "nqn": "nqn.2016-06.io.spdk:cnode27580", 00:16:43.935 "model_number": "SPDK_Controller\u001f" 00:16:43.935 } 
00:16:43.935 } 00:16:43.935 Got JSON-RPC error response 00:16:43.935 GoRPCClient: error on JSON-RPC call' 00:16:43.935 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/12/14 18:36:59 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode27580], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:16:43.935 request: 00:16:43.935 { 00:16:43.935 "method": "nvmf_create_subsystem", 00:16:43.935 "params": { 00:16:43.935 "nqn": "nqn.2016-06.io.spdk:cnode27580", 00:16:43.935 "model_number": "SPDK_Controller\u001f" 00:16:43.935 } 00:16:43.935 } 00:16:43.935 Got JSON-RPC error response 00:16:43.935 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:43.935 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:43.935 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:43.935 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:43.935 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:43.936 18:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:16:43.936 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:16:44.195 
18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x31' 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ @ == \- ]] 00:16:44.195 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '@X'\''v:r1' 00:16:44.196 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '@X'\''v:r1' nqn.2016-06.io.spdk:cnode7352 00:16:44.455 [2024-12-14 18:37:00.057180] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7352: invalid serial number '@X'v:r1' 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/12/14 18:37:00 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode7352 serial_number:@X'\''v:r1], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN @X'\''v:r1 00:16:44.455 request: 00:16:44.455 { 00:16:44.455 "method": "nvmf_create_subsystem", 00:16:44.455 "params": { 00:16:44.455 "nqn": "nqn.2016-06.io.spdk:cnode7352", 00:16:44.455 "serial_number": "@X'\''v:r1" 00:16:44.455 } 00:16:44.455 } 00:16:44.455 Got JSON-RPC error response 00:16:44.455 GoRPCClient: error on JSON-RPC call' 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/12/14 18:37:00 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode7352 serial_number:@X'v:r1], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN @X'v:r1 00:16:44.455 request: 00:16:44.455 { 00:16:44.455 "method": "nvmf_create_subsystem", 00:16:44.455 "params": { 00:16:44.455 "nqn": "nqn.2016-06.io.spdk:cnode7352", 00:16:44.455 "serial_number": "@X'v:r1" 00:16:44.455 } 00:16:44.455 } 00:16:44.455 Got JSON-RPC error response 00:16:44.455 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.455 18:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.455 18:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:16:44.455 
18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:44.455 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 
00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.456 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 
00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ O == \- ]] 00:16:44.715 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'OQtJXZi?3xisCYC *R|iM<OPN<_.99.<aPk@ImhJd' 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0 00:16:48.128 00:16:48.128 real 0m6.493s 00:16:48.128 user 0m24.869s 00:16:48.128 sys 0m1.516s 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:48.128 ************************************ 00:16:48.128 END TEST nvmf_invalid 00:16:48.128 ************************************ 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:48.128 ************************************ 00:16:48.128 START TEST nvmf_connect_stress 00:16:48.128 ************************************ 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:48.128 * Looking for test storage...
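What the nvmf_invalid trace above boils down to: target/invalid.sh assembles a throwaway string one character at a time from random printable-ASCII code points 32-127 (the repeating printf %x / echo -e / string+= triplets), guards against a leading '-' (the [[ O == \- ]] test at invalid.sh@28), and then offers the result to rpc.py nvmf_create_subsystem as a serial number or model number that the target must refuse with JSON-RPC Code=-32602, mirroring the Invalid SN rejection logged above. A minimal standalone sketch of that pattern, assuming it is run from an SPDK checkout against a live target and using an illustrative cnode number (this is not the literal test code):

    #!/usr/bin/env bash
    # Sketch: build a random printable-ASCII string the way gen_random_s does,
    # then use it as a serial number that is too long to be valid (an NVMe SN
    # is at most 20 bytes), expecting the RPC to fail with Code=-32602.
    gen_random_s() {
        local length=$1 ll string=
        for ((ll = 0; ll < length; ll++)); do
            # printf %x renders a random code point in 32..127 as hex;
            # echo -e decodes the \xNN escape back into the character
            string+=$(echo -e "\\x$(printf '%x' $((RANDOM % 96 + 32)))")
        done
        # printf rather than echo, so a leading '-' cannot be taken as an
        # option (the real script special-cases that before invalid.sh@31)
        printf '%s\n' "$string"
    }

    sn=$(gen_random_s 41)
    if ! ./scripts/rpc.py nvmf_create_subsystem -s "$sn" nqn.2016-06.io.spdk:cnode9999; then
        echo "rejected as expected: Invalid SN"
    fi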
00:16:48.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:48.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.128 --rc genhtml_branch_coverage=1 00:16:48.128 --rc genhtml_function_coverage=1 00:16:48.128 --rc genhtml_legend=1 00:16:48.128 --rc geninfo_all_blocks=1 00:16:48.128 --rc geninfo_unexecuted_blocks=1 00:16:48.128 00:16:48.128 ' 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:48.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.128 --rc genhtml_branch_coverage=1 00:16:48.128 --rc genhtml_function_coverage=1 00:16:48.128 --rc genhtml_legend=1 00:16:48.128 --rc geninfo_all_blocks=1 00:16:48.128 --rc geninfo_unexecuted_blocks=1 00:16:48.128 00:16:48.128 ' 00:16:48.128 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:48.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.128 --rc genhtml_branch_coverage=1 00:16:48.128 --rc genhtml_function_coverage=1 00:16:48.128 --rc genhtml_legend=1 00:16:48.128 --rc geninfo_all_blocks=1 00:16:48.128 --rc geninfo_unexecuted_blocks=1 00:16:48.128 00:16:48.128 ' 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:48.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.389 --rc genhtml_branch_coverage=1 00:16:48.389 --rc genhtml_function_coverage=1 00:16:48.389 --rc genhtml_legend=1 00:16:48.389 --rc geninfo_all_blocks=1 00:16:48.389 --rc geninfo_unexecuted_blocks=1 00:16:48.389 00:16:48.389 ' 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
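The scripts/common.sh chatter just above is the harness probing the installed lcov: lt 1.15 2 splits both version strings on '.', '-' and ':' (the IFS=.-: / read -ra lines), checks each field is numeric (the [[ 1 =~ ^[0-9]+$ ]] tests), and compares field by field until one side wins, after which the matching LCOV_OPTS/LCOV flags are exported. A rough standalone equivalent of the comparison, simplified (field validation omitted) and not the exact cmp_versions code:

    #!/usr/bin/env bash
    # Rough sketch of the field-by-field version test in scripts/common.sh:
    # succeeds when $1 sorts strictly before $2 (so "1.15" is older than "2").
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
        for ((v = 0; v < len; v++)); do
            # a missing field counts as 0, so 1.15 vs 2 compares like 1.15 vs 2.0
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2: use the old-style coverage flags"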
00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:48.389 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:48.389 18:37:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:48.389 18:37:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:48.389 Cannot find device "nvmf_init_br" 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:48.389 Cannot find device "nvmf_init_br2" 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:48.389 Cannot find device "nvmf_tgt_br" 00:16:48.389 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:16:48.390 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:48.390 Cannot find device "nvmf_tgt_br2" 00:16:48.390 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:16:48.390 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:48.390 Cannot find device "nvmf_init_br" 00:16:48.390 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:16:48.390 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:48.390 Cannot find device "nvmf_init_br2" 00:16:48.390 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:16:48.390 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:48.390 Cannot find device "nvmf_tgt_br" 00:16:48.390 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:16:48.390 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:48.390 Cannot find device "nvmf_tgt_br2" 00:16:48.390 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:16:48.390 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:48.390 Cannot find device "nvmf_br" 00:16:48.390 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:16:48.390 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:48.390 Cannot find device "nvmf_init_if" 00:16:48.390 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:16:48.390 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:48.390 Cannot find device "nvmf_init_if2" 00:16:48.390 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:16:48.390 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:48.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.390 18:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:16:48.390 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:48.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.390 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:16:48.390 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:48.390 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:48.390 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:48.390 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:48.390 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:48.390 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:48.390 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:48.649 18:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:48.649 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:48.650 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:48.650 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:16:48.650 00:16:48.650 --- 10.0.0.3 ping statistics --- 00:16:48.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.650 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:48.650 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:48.650 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:16:48.650 00:16:48.650 --- 10.0.0.4 ping statistics --- 00:16:48.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.650 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:48.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:48.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:16:48.650 00:16:48.650 --- 10.0.0.1 ping statistics --- 00:16:48.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.650 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:48.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:48.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:16:48.650 00:16:48.650 --- 10.0.0.2 ping statistics --- 00:16:48.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.650 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # return 0 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=91453 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 91453 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 91453 ']' 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:48.650 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.909 [2024-12-14 18:37:04.444936] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
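Everything from nvmftestinit down to the four pings is nvmf/common.sh's nvmf_veth_init building a self-contained test network. The "Cannot find device" / "Cannot open network namespace" lines are only the idempotent teardown of interfaces that do not exist yet; the harness then creates two initiator veth pairs (10.0.0.1 and 10.0.0.2 on the host side) and two target veth pairs (10.0.0.3 and 10.0.0.4 inside the nvmf_tgt_ns_spdk namespace), enslaves all the bridge-side ends to nvmf_br, opens TCP 4420 in iptables, and ping-verifies every address in both directions. A condensed sketch of that topology with a single initiator/target pair (same interface names as the trace, but not the full four-interface setup):

    #!/usr/bin/env bash
    # Condensed sketch of the veth/namespace topology the trace builds (one pair).
    set -e
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two halves
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT         # let bridged traffic flow
    ping -c 1 10.0.0.3                                          # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # namespace -> host

With that in place, the nvmf_tgt started next in the log can listen on 10.0.0.3:4420 inside the namespace while initiators connect from the host side of the bridge.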
00:16:48.909 [2024-12-14 18:37:04.445041] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.909 [2024-12-14 18:37:04.589830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:49.167 [2024-12-14 18:37:04.679020] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.167 [2024-12-14 18:37:04.679101] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.167 [2024-12-14 18:37:04.679117] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.167 [2024-12-14 18:37:04.679128] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.167 [2024-12-14 18:37:04.679137] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.167 [2024-12-14 18:37:04.679316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.167 [2024-12-14 18:37:04.679931] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:49.167 [2024-12-14 18:37:04.679996] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.105 [2024-12-14 18:37:05.559621] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:50.105 18:37:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.105 [2024-12-14 18:37:05.586051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.105 NULL1 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=91506 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.105 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.106 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.364 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:16:50.364 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:50.364 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.364 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.364 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.623 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.623 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:50.623 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.623 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.623 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.191 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.191 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:51.192 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.192 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.192 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.451 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.451 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:51.451 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.451 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.451 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.709 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.710 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:51.710 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.710 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.710 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.968 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.968 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:51.968 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.968 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.969 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.227 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.227 
18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:52.227 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.227 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.227 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.794 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.794 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:52.794 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.794 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.794 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.052 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.052 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:53.052 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.052 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.052 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.311 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.311 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:53.311 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.311 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.311 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.569 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.569 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:53.569 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.569 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.569 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.828 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.828 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:53.828 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.828 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.828 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.394 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.394 18:37:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:54.394 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.394 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.394 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.653 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.653 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:54.653 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.653 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.653 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.912 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.912 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:54.912 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.912 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.912 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.171 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.171 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:55.171 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.171 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.171 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.429 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.429 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:55.429 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.429 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.429 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.997 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.997 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:55.997 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.997 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.997 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.256 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.256 18:37:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:56.256 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.256 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.256 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.514 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.514 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:56.514 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.514 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.514 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.773 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.773 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:56.773 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.773 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.773 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.341 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.341 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:57.341 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.341 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.341 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.600 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.600 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:57.600 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.600 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.600 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.858 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.858 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:57.859 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.859 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.859 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.117 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.117 18:37:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:58.117 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.117 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.117 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.376 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.376 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:58.376 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.376 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.376 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.943 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.943 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:58.943 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.943 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.943 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.202 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.202 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:59.202 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.202 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.202 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.462 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.462 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:59.462 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.462 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.462 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.725 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.725 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:16:59.725 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.725 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.725 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.006 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.006 18:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:17:00.006 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.006 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.006 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.273 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:00.273 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.273 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91506 00:17:00.273 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (91506) - No such process 00:17:00.273 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 91506 00:17:00.273 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:17:00.273 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:00.273 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:00.273 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:00.273 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:00.532 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:00.532 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:00.532 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:00.532 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:00.532 rmmod nvme_tcp 00:17:00.532 rmmod nvme_fabrics 00:17:00.532 rmmod nvme_keyring 00:17:00.532 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:00.532 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:00.532 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:00.532 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 91453 ']' 00:17:00.532 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 91453 00:17:00.532 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 91453 ']' 00:17:00.532 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 91453 00:17:00.532 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:17:00.532 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:00.532 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91453 00:17:00.532 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:00.532 18:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:00.532 killing process with pid 91453 00:17:00.532 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91453' 00:17:00.532 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 91453 00:17:00.532 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 91453 00:17:00.791 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:00.791 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:00.791 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:00.791 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:00.791 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-save 00:17:00.791 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-restore 00:17:00.791 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:00.791 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:00.791 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:00.791 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:00.791 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:00.791 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:00.791 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:00.791 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:00.791 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:00.791 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:00.791 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:00.791 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:01.050 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:01.050 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:01.050 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:01.050 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:01.050 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:01.050 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.050 
18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.050 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.050 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 00:17:01.050 00:17:01.050 real 0m12.955s 00:17:01.050 user 0m41.892s 00:17:01.050 sys 0m3.412s 00:17:01.050 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:01.050 ************************************ 00:17:01.050 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:01.050 END TEST nvmf_connect_stress 00:17:01.050 ************************************ 00:17:01.050 18:37:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:01.050 18:37:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:01.050 18:37:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:01.050 18:37:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:01.050 ************************************ 00:17:01.050 START TEST nvmf_fused_ordering 00:17:01.050 ************************************ 00:17:01.050 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:01.309 * Looking for test storage... 00:17:01.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:01.309 18:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:01.309 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:01.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.310 --rc genhtml_branch_coverage=1 00:17:01.310 --rc genhtml_function_coverage=1 00:17:01.310 --rc genhtml_legend=1 00:17:01.310 --rc geninfo_all_blocks=1 00:17:01.310 --rc geninfo_unexecuted_blocks=1 00:17:01.310 00:17:01.310 ' 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:01.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.310 --rc genhtml_branch_coverage=1 00:17:01.310 --rc genhtml_function_coverage=1 00:17:01.310 --rc genhtml_legend=1 00:17:01.310 --rc geninfo_all_blocks=1 00:17:01.310 --rc geninfo_unexecuted_blocks=1 00:17:01.310 00:17:01.310 ' 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:01.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.310 --rc genhtml_branch_coverage=1 00:17:01.310 --rc genhtml_function_coverage=1 00:17:01.310 --rc genhtml_legend=1 00:17:01.310 --rc geninfo_all_blocks=1 00:17:01.310 --rc geninfo_unexecuted_blocks=1 00:17:01.310 00:17:01.310 ' 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:01.310 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:01.310 --rc genhtml_branch_coverage=1 00:17:01.310 --rc genhtml_function_coverage=1 00:17:01.310 --rc genhtml_legend=1 00:17:01.310 --rc geninfo_all_blocks=1 00:17:01.310 --rc geninfo_unexecuted_blocks=1 00:17:01.310 00:17:01.310 ' 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:01.310 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:01.310 18:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:01.310 Cannot find device "nvmf_init_br" 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:17:01.310 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:01.310 Cannot find device "nvmf_init_br2" 00:17:01.311 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:17:01.311 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:01.311 Cannot find device "nvmf_tgt_br" 00:17:01.311 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:17:01.311 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:01.311 Cannot find device "nvmf_tgt_br2" 00:17:01.311 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:17:01.311 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:01.311 Cannot find device "nvmf_init_br" 00:17:01.311 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:17:01.311 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:01.311 Cannot find device "nvmf_init_br2" 00:17:01.311 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:17:01.311 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:01.311 Cannot find device "nvmf_tgt_br" 00:17:01.311 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:17:01.311 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:01.311 Cannot find device "nvmf_tgt_br2" 00:17:01.311 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:17:01.311 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:01.311 Cannot find device "nvmf_br" 00:17:01.311 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:17:01.311 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:01.570 Cannot find device "nvmf_init_if" 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:17:01.570 
18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:01.570 Cannot find device "nvmf_init_if2" 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:01.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:01.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:01.570 18:37:17 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:01.570 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:01.570 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:17:01.570 00:17:01.570 --- 10.0.0.3 ping statistics --- 00:17:01.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.570 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:01.570 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:01.570 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:17:01.570 00:17:01.570 --- 10.0.0.4 ping statistics --- 00:17:01.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.570 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:01.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:01.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms
00:17:01.570
00:17:01.570 --- 10.0.0.1 ping statistics ---
00:17:01.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:01.570 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms
00:17:01.570 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:17:01.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:01.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms
00:17:01.830
00:17:01.830 --- 10.0.0.2 ping statistics ---
00:17:01.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:01.830 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # return 0
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=91901
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 91901
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 91901 ']'
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100
00:17:01.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
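For orientation: the nvmf_veth_init sequence traced above wires up a private network namespace for the target before nvmf_tgt starts. A condensed sketch of the topology commands, reconstructed from the log itself (the helper also creates a second initiator/target pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, brings every link up, and tags each iptables rule with an SPDK_NVMF comment; those steps are elided here):

  ip netns add nvmf_tgt_ns_spdk                                # namespace that will host the target
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                              # bridge joins the *_br peer ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.3                                           # host -> namespace reachability
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # namespace -> host reachability

Once all four pings succeed, the target is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 91901 here) and the harness polls for its RPC socket at /var/tmp/spdk.sock.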
00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:01.830 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:01.830 [2024-12-14 18:37:17.444174] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:01.830 [2024-12-14 18:37:17.444275] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.089 [2024-12-14 18:37:17.583731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.089 [2024-12-14 18:37:17.670925] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.089 [2024-12-14 18:37:17.670985] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.089 [2024-12-14 18:37:17.670995] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.089 [2024-12-14 18:37:17.671002] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.089 [2024-12-14 18:37:17.671013] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.089 [2024-12-14 18:37:17.671042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.656 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:02.656 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:17:02.656 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:02.656 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:02.656 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.915 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.915 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:02.915 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.915 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.915 [2024-12-14 18:37:18.447306] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.915 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.915 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:02.915 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.915 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.915 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.915 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:02.915 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.915 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.915 [2024-12-14 18:37:18.463428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:02.915 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.915 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:02.916 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.916 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.916 NULL1 00:17:02.916 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.916 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:02.916 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.916 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.916 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.916 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:02.916 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.916 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.916 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.916 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:02.916 [2024-12-14 18:37:18.515063] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:02.916 [2024-12-14 18:37:18.515146] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91951 ] 00:17:03.175 Attached to nqn.2016-06.io.spdk:cnode1 00:17:03.175 Namespace ID: 1 size: 1GB 00:17:03.175 fused_ordering(0) [fused_ordering(1) through fused_ordering(1022) elided: 1024 sequential fused_ordering operations logged between 00:17:03.175 and 00:17:04.523] 00:17:04.523 fused_ordering(1023) 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:04.782 rmmod nvme_tcp 00:17:04.782 rmmod nvme_fabrics 00:17:04.782 rmmod nvme_keyring 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:04.782 18:37:20
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 91901 ']' 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 91901 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 91901 ']' 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 91901 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91901 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:04.782 killing process with pid 91901 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91901' 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 91901 00:17:04.782 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 91901 00:17:05.041 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:05.041 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:05.041 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:05.041 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:05.041 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-restore 00:17:05.041 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-save 00:17:05.041 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:05.041 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:05.041 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:05.041 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:05.041 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:05.041 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:05.041 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:05.041 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:05.300 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:05.300 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:05.300 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:17:05.300 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:05.300 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:05.300 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:05.300 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:05.300 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:05.300 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:05.300 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.300 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.300 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.300 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:17:05.300 ************************************ 00:17:05.300 END TEST nvmf_fused_ordering 00:17:05.300 ************************************ 00:17:05.300 00:17:05.300 real 0m4.258s 00:17:05.300 user 0m4.603s 00:17:05.300 sys 0m1.411s 00:17:05.300 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:05.300 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.300 18:37:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:05.300 18:37:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:05.300 18:37:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:05.300 18:37:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:05.300 ************************************ 00:17:05.300 START TEST nvmf_ns_masking 00:17:05.300 ************************************ 00:17:05.300 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:05.560 * Looking for test storage... 
00:17:05.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:05.560 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:05.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.561 --rc genhtml_branch_coverage=1 00:17:05.561 --rc genhtml_function_coverage=1 00:17:05.561 --rc genhtml_legend=1 00:17:05.561 --rc geninfo_all_blocks=1 00:17:05.561 --rc geninfo_unexecuted_blocks=1 00:17:05.561 00:17:05.561 ' 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:05.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.561 --rc genhtml_branch_coverage=1 00:17:05.561 --rc genhtml_function_coverage=1 00:17:05.561 --rc genhtml_legend=1 00:17:05.561 --rc geninfo_all_blocks=1 00:17:05.561 --rc geninfo_unexecuted_blocks=1 00:17:05.561 00:17:05.561 ' 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:05.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.561 --rc genhtml_branch_coverage=1 00:17:05.561 --rc genhtml_function_coverage=1 00:17:05.561 --rc genhtml_legend=1 00:17:05.561 --rc geninfo_all_blocks=1 00:17:05.561 --rc geninfo_unexecuted_blocks=1 00:17:05.561 00:17:05.561 ' 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:05.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.561 --rc genhtml_branch_coverage=1 00:17:05.561 --rc genhtml_function_coverage=1 00:17:05.561 --rc genhtml_legend=1 00:17:05.561 --rc geninfo_all_blocks=1 00:17:05.561 --rc geninfo_unexecuted_blocks=1 00:17:05.561 00:17:05.561 ' 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
# uname -s 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:05.561 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=ffa2b3cc-3591-4816-a266-db24e40f05e0 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=582d52a0-4e27-4e9f-887d-d3ec19a4211a 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=27f48191-bbac-4979-99f1-b74f64a52eab 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:05.561 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:05.561 18:37:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.562 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:05.562 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:05.562 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:05.562 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:05.562 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:05.562 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:05.562 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.562 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:05.562 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:05.562 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:05.562 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:05.562 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:05.562 Cannot find device "nvmf_init_br" 00:17:05.562 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:17:05.562 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:05.562 Cannot find device "nvmf_init_br2" 00:17:05.562 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:17:05.562 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:05.821 Cannot find device "nvmf_tgt_br" 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:05.821 Cannot find device "nvmf_tgt_br2" 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:05.821 Cannot find device "nvmf_init_br" 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:05.821 Cannot find device "nvmf_init_br2" 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:05.821 Cannot find device "nvmf_tgt_br" 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:05.821 Cannot find device 
"nvmf_tgt_br2" 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:05.821 Cannot find device "nvmf_br" 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:05.821 Cannot find device "nvmf_init_if" 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:05.821 Cannot find device "nvmf_init_if2" 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:05.821 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:05.821 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:05.821 
18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:05.821 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:06.080 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:06.080 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:17:06.080 00:17:06.080 --- 10.0.0.3 ping statistics --- 00:17:06.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.080 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:06.080 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:06.080 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:17:06.080 00:17:06.080 --- 10.0.0.4 ping statistics --- 00:17:06.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.080 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:06.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:06.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:17:06.080 00:17:06.080 --- 10.0.0.1 ping statistics --- 00:17:06.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.080 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:06.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:17:06.080 00:17:06.080 --- 10.0.0.2 ping statistics --- 00:17:06.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.080 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # return 0 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=92195 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 92195 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 92195 ']' 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.080 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.080 18:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:06.080 [2024-12-14 18:37:21.755993] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:06.080 [2024-12-14 18:37:21.756097] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.339 [2024-12-14 18:37:21.899385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.339 [2024-12-14 18:37:21.994406] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.339 [2024-12-14 18:37:21.994469] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.339 [2024-12-14 18:37:21.994483] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.339 [2024-12-14 18:37:21.994494] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.339 [2024-12-14 18:37:21.994503] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.339 [2024-12-14 18:37:21.994538] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.275 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:07.275 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:07.275 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:07.275 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:07.275 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:07.275 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.275 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:07.534 [2024-12-14 18:37:23.138980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.534 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:07.534 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:07.534 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:07.792 Malloc1 00:17:07.792 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:08.051 Malloc2 00:17:08.051 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:08.309 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:08.568 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:08.830 [2024-12-14 18:37:24.432420] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:08.830 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:08.830 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 27f48191-bbac-4979-99f1-b74f64a52eab -a 10.0.0.3 -s 4420 -i 4 00:17:08.830 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:08.830 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:08.830 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:08.830 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:08.830 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:11.363 [ 0]:0x1 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
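For orientation, the bring-up traced above condenses to the short sequence below. This is a sketch assembled from the logged commands, not a standalone recipe: the rpc.py path, NQNs, serial number, and addresses are copied from the trace, and the subsystem is created with -a (allow any host), which is why the first visibility check sees namespace 1 without any masking rules in place.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: TCP transport, one 64 MiB malloc bdev with 512-byte blocks,
# and an allow-any-host subsystem listening on 10.0.0.3:4420.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Initiator side: connect with an explicit host NQN and host ID so the
# masking rules exercised later can key on this particular host.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I 27f48191-bbac-4979-99f1-b74f64a52eab -a 10.0.0.3 -s 4420 -i 4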
00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e2d1c1cc131847b6afeb215d6f6a5dc5 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e2d1c1cc131847b6afeb215d6f6a5dc5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:11.363 [ 0]:0x1 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:11.363 18:37:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:11.363 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e2d1c1cc131847b6afeb215d6f6a5dc5 00:17:11.363 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e2d1c1cc131847b6afeb215d6f6a5dc5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:11.363 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:11.363 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:11.363 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:11.363 [ 1]:0x2 00:17:11.363 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:11.363 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:11.622 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=540ee851192243e6a577ac159af643e2 00:17:11.622 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 540ee851192243e6a577ac159af643e2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:11.622 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:11.622 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:11.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.622 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:11.880 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:12.138 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:12.138 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 27f48191-bbac-4979-99f1-b74f64a52eab -a 10.0.0.3 -s 4420 -i 4 00:17:12.138 18:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:12.138 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:12.138 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:12.138 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:17:12.138 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:17:12.138 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:14.671 [ 0]:0x2 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:14.671 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:14.671 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=540ee851192243e6a577ac159af643e2 00:17:14.672 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 540ee851192243e6a577ac159af643e2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:14.672 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:14.672 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:14.672 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:14.672 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:14.672 [ 0]:0x1 00:17:14.672 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:14.672 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:14.672 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e2d1c1cc131847b6afeb215d6f6a5dc5 00:17:14.672 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e2d1c1cc131847b6afeb215d6f6a5dc5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:14.672 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:14.672 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:14.672 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:14.672 [ 1]:0x2 00:17:14.672 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:14.672 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:14.930 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=540ee851192243e6a577ac159af643e2 00:17:14.930 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 540ee851192243e6a577ac159af643e2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:14.930 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:15.189 [ 0]:0x2 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=540ee851192243e6a577ac159af643e2 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ 540ee851192243e6a577ac159af643e2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:15.189 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:15.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.448 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:15.707 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:15.707 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 27f48191-bbac-4979-99f1-b74f64a52eab -a 10.0.0.3 -s 4420 -i 4 00:17:15.707 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:15.707 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:15.707 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:15.707 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:15.707 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:15.707 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:18.241 [ 0]:0x1 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e2d1c1cc131847b6afeb215d6f6a5dc5 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e2d1c1cc131847b6afeb215d6f6a5dc5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:18.241 [ 1]:0x2 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=540ee851192243e6a577ac159af643e2 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 540ee851192243e6a577ac159af643e2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 
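The visibility probe repeated throughout the trace (the "[ 0]:0x1" / "[ 1]:0x2" lines followed by an nguid comparison) is the test's ns_is_visible helper. Reconstructed from the traced commands, it amounts to the sketch below: a namespace masked away from this host drops out of nvme list-ns and identifies with an all-zero NGUID, while masking itself is driven entirely by target-side RPCs.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

ns_is_visible() {   # reconstructed from the trace; illustrative only
    nvme list-ns /dev/nvme0 | grep "$1"
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

# Add the namespace without auto-visibility, then grant or revoke it per host NQN.
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
$rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # unmask for host1
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # mask again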
00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:18.241 [ 0]:0x2 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=540ee851192243e6a577ac159af643e2 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 540ee851192243e6a577ac159af643e2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:18.241 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:18.810 [2024-12-14 18:37:34.265735] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:18.810 2024/12/14 18:37:34 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:17:18.810 request: 00:17:18.810 { 00:17:18.810 "method": "nvmf_ns_remove_host", 00:17:18.810 "params": { 00:17:18.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.810 "nsid": 2, 00:17:18.810 "host": "nqn.2016-06.io.spdk:host1" 00:17:18.810 } 00:17:18.810 } 00:17:18.810 Got JSON-RPC error response 00:17:18.810 GoRPCClient: error on JSON-RPC call 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # 
grep 0x2 00:17:18.810 [ 0]:0x2 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=540ee851192243e6a577ac159af643e2 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 540ee851192243e6a577ac159af643e2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:18.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=92585 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 92585 /var/tmp/host.sock 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 92585 ']' 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:18.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:18.810 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:18.810 [2024-12-14 18:37:34.533804] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
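The "Invalid parameters" failure above is the expected result: namespace 2 was added auto-visible (without --no-auto-visible), and the target rejects nvmf_ns_add_host/nvmf_ns_remove_host for such namespaces with JSON-RPC code -32602. The test asserts this with the NOT wrapper from autotest_common.sh, which succeeds exactly when the wrapped command fails; a simplified sketch of that pattern:

NOT() {   # simplified; the real autotest_common.sh helper does more bookkeeping
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # invert: succeed only when the wrapped command failed
}

NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
    nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1

With the kernel-initiator checks done, the trace then disconnects and starts a second SPDK application (spdk_tgt -r /var/tmp/host.sock -m 2) to repeat the verification from the SPDK host stack.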
00:17:18.810 [2024-12-14 18:37:34.533896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92585 ] 00:17:19.069 [2024-12-14 18:37:34.669516] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.069 [2024-12-14 18:37:34.756378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.636 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:19.636 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:19.636 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:19.636 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:19.894 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid ffa2b3cc-3591-4816-a266-db24e40f05e0 00:17:19.894 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:17:19.894 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FFA2B3CC35914816A266DB24E40F05E0 -i 00:17:20.153 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 582d52a0-4e27-4e9f-887d-d3ec19a4211a 00:17:20.153 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:17:20.153 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 582D52A04E274E9F887DD3EC19A4211A -i 00:17:20.411 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:20.670 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:20.929 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:20.929 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:21.188 nvme0n1 00:17:21.447 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:21.447 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 
-s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:21.706 nvme1n2 00:17:21.706 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:21.706 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:21.706 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:21.706 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:21.706 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:21.965 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:21.965 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:21.965 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:21.965 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:22.224 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ ffa2b3cc-3591-4816-a266-db24e40f05e0 == \f\f\a\2\b\3\c\c\-\3\5\9\1\-\4\8\1\6\-\a\2\6\6\-\d\b\2\4\e\4\0\f\0\5\e\0 ]] 00:17:22.224 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:22.224 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:22.224 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:22.483 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 582d52a0-4e27-4e9f-887d-d3ec19a4211a == \5\8\2\d\5\2\a\0\-\4\e\2\7\-\4\e\9\f\-\8\8\7\d\-\d\3\e\c\1\9\a\4\2\1\1\a ]] 00:17:22.483 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 92585 00:17:22.483 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 92585 ']' 00:17:22.483 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 92585 00:17:22.483 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:22.483 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:22.483 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92585 00:17:22.483 killing process with pid 92585 00:17:22.483 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:22.483 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:22.483 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92585' 00:17:22.483 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 92585 00:17:22.483 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # 
wait 92585 00:17:23.081 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:23.364 rmmod nvme_tcp 00:17:23.364 rmmod nvme_fabrics 00:17:23.364 rmmod nvme_keyring 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 92195 ']' 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 92195 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 92195 ']' 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 92195 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92195 00:17:23.364 killing process with pid 92195 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92195' 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 92195 00:17:23.364 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 92195 00:17:23.641 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:23.641 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:23.641 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:23.641 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:23.641 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-save 00:17:23.641 18:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:23.641 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-restore 00:17:23.641 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:23.641 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:23.641 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:23.641 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:23.641 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:23.641 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:23.641 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:23.641 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:23.641 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 00:17:23.900 00:17:23.900 real 0m18.533s 00:17:23.900 user 0m28.287s 00:17:23.900 sys 0m3.175s 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:23.900 ************************************ 00:17:23.900 END TEST nvmf_ns_masking 00:17:23.900 ************************************ 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:23.900 ************************************ 00:17:23.900 START TEST nvmf_vfio_user 00:17:23.900 ************************************ 00:17:23.900 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:24.160 * Looking for test storage... 00:17:24.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:24.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.160 --rc genhtml_branch_coverage=1 00:17:24.160 --rc genhtml_function_coverage=1 00:17:24.160 --rc genhtml_legend=1 00:17:24.160 --rc geninfo_all_blocks=1 00:17:24.160 --rc geninfo_unexecuted_blocks=1 00:17:24.160 00:17:24.160 ' 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:24.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.160 --rc genhtml_branch_coverage=1 00:17:24.160 --rc genhtml_function_coverage=1 00:17:24.160 --rc genhtml_legend=1 00:17:24.160 --rc geninfo_all_blocks=1 00:17:24.160 --rc geninfo_unexecuted_blocks=1 00:17:24.160 00:17:24.160 ' 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:24.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.160 --rc genhtml_branch_coverage=1 00:17:24.160 --rc genhtml_function_coverage=1 00:17:24.160 --rc genhtml_legend=1 00:17:24.160 --rc geninfo_all_blocks=1 00:17:24.160 --rc geninfo_unexecuted_blocks=1 00:17:24.160 00:17:24.160 ' 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:24.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.160 --rc genhtml_branch_coverage=1 00:17:24.160 --rc genhtml_function_coverage=1 00:17:24.160 --rc genhtml_legend=1 00:17:24.160 --rc geninfo_all_blocks=1 00:17:24.160 --rc geninfo_unexecuted_blocks=1 00:17:24.160 00:17:24.160 ' 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 
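The lcov version probe traced above is a component-wise comparison of dotted version strings: split on '.', '-' and ':', then compare numerically field by field until one side wins. A condensed sketch of that logic under the same splitting rules (the helper name lt_version is ours, not from the repo):

  # return 0 if version $1 sorts strictly before version $2
  lt_version() {
    local IFS=.-: i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1  # equal versions are not less-than
  }
  lt_version 1.15 2 && echo 'lcov 1.15 predates 2.x'  # mirrors the lt 1.15 2 probe above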
00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:24.160 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:24.160 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:24.161 18:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:24.161 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:24.161 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:24.161 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:24.161 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:24.161 Process pid: 92860 00:17:24.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.161 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:24.161 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:24.161 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:24.161 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=92860 00:17:24.161 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 92860' 00:17:24.161 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:24.161 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 92860 00:17:24.161 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:24.161 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 92860 ']' 00:17:24.161 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.161 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:24.161 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.161 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:24.161 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:24.419 [2024-12-14 18:37:39.934260] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:24.419 [2024-12-14 18:37:39.935150] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.419 [2024-12-14 18:37:40.078975] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:24.678 [2024-12-14 18:37:40.175252] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.678 [2024-12-14 18:37:40.175637] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
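These startup notices describe the built-in tracing: the target was launched with -e 0xFFFF (all tracepoint groups) and shm id 0, so events can be inspected while it runs. Per the hints the app prints here, a capture would look like the following sketch (the copy destination is arbitrary):

  # live snapshot of the nvmf tracepoint group for app instance 0
  spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0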
00:17:24.678 [2024-12-14 18:37:40.175798] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.678 [2024-12-14 18:37:40.175958] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.678 [2024-12-14 18:37:40.176002] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.678 [2024-12-14 18:37:40.176345] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.678 [2024-12-14 18:37:40.176480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.678 [2024-12-14 18:37:40.176559] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:24.678 [2024-12-14 18:37:40.176560] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.614 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:25.614 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:25.614 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:26.549 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:26.549 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:26.549 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:26.549 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:26.549 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:26.808 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:27.067 Malloc1 00:17:27.067 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:27.327 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:27.586 18:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:27.845 18:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:27.845 18:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:27.845 18:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:28.103 Malloc2 00:17:28.103 18:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:28.671 18:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:28.930 18:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:28.930 18:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:28.930 18:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:28.930 18:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:28.930 18:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:28.930 18:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:28.930 18:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:29.192 [2024-12-14 18:37:44.692239] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:29.192 [2024-12-14 18:37:44.692293] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93002 ] 00:17:29.192 [2024-12-14 18:37:44.832048] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:29.192 [2024-12-14 18:37:44.841059] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:29.192 [2024-12-14 18:37:44.841105] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe190f2a000 00:17:29.192 [2024-12-14 18:37:44.842051] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:29.192 [2024-12-14 18:37:44.843054] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:29.192 [2024-12-14 18:37:44.844038] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:29.192 [2024-12-14 18:37:44.845037] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:29.192 [2024-12-14 18:37:44.846042] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:29.192 [2024-12-14 18:37:44.847084] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:29.192 [2024-12-14 18:37:44.848086] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:29.192 [2024-12-14 18:37:44.849072] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:17:29.192 [2024-12-14 18:37:44.850080] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:29.192 [2024-12-14 18:37:44.850121] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe18fc82000 00:17:29.192 [2024-12-14 18:37:44.851305] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:29.192 [2024-12-14 18:37:44.869129] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:29.192 [2024-12-14 18:37:44.869168] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:17:29.192 [2024-12-14 18:37:44.872233] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:29.192 [2024-12-14 18:37:44.872282] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:29.192 [2024-12-14 18:37:44.872362] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:17:29.192 [2024-12-14 18:37:44.872388] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:17:29.192 [2024-12-14 18:37:44.872395] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:17:29.192 [2024-12-14 18:37:44.873214] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:29.192 [2024-12-14 18:37:44.873237] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:17:29.192 [2024-12-14 18:37:44.873254] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:17:29.192 [2024-12-14 18:37:44.874215] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:29.192 [2024-12-14 18:37:44.874237] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:17:29.192 [2024-12-14 18:37:44.874248] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:17:29.192 [2024-12-14 18:37:44.875222] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:29.192 [2024-12-14 18:37:44.875243] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:29.192 [2024-12-14 18:37:44.876224] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:29.192 [2024-12-14 18:37:44.876245] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:17:29.192 [2024-12-14 
18:37:44.876251] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:17:29.192 [2024-12-14 18:37:44.876260] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:29.192 [2024-12-14 18:37:44.876365] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:17:29.192 [2024-12-14 18:37:44.876371] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:29.192 [2024-12-14 18:37:44.876376] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:29.192 [2024-12-14 18:37:44.877242] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:29.192 [2024-12-14 18:37:44.878228] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:29.192 [2024-12-14 18:37:44.879234] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:29.192 [2024-12-14 18:37:44.880227] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:29.192 [2024-12-14 18:37:44.880862] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:29.192 [2024-12-14 18:37:44.881271] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:29.192 [2024-12-14 18:37:44.881288] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:29.192 [2024-12-14 18:37:44.881294] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:17:29.192 [2024-12-14 18:37:44.881313] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:17:29.192 [2024-12-14 18:37:44.881328] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:17:29.192 [2024-12-14 18:37:44.881348] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:29.192 [2024-12-14 18:37:44.881355] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:29.192 [2024-12-14 18:37:44.881359] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:29.192 [2024-12-14 18:37:44.881374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:29.192 [2024-12-14 18:37:44.881441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:29.192 [2024-12-14 18:37:44.881462] 
nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:17:29.192 [2024-12-14 18:37:44.881468] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:17:29.192 [2024-12-14 18:37:44.881472] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:17:29.192 [2024-12-14 18:37:44.881476] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:29.192 [2024-12-14 18:37:44.881482] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:17:29.192 [2024-12-14 18:37:44.881487] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:17:29.193 [2024-12-14 18:37:44.881492] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:17:29.193 [2024-12-14 18:37:44.881501] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:17:29.193 [2024-12-14 18:37:44.881512] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:29.193 [2024-12-14 18:37:44.881532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:29.193 [2024-12-14 18:37:44.881545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.193 [2024-12-14 18:37:44.881554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.193 [2024-12-14 18:37:44.881562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.193 [2024-12-14 18:37:44.881571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.193 [2024-12-14 18:37:44.881575] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:29.193 [2024-12-14 18:37:44.881589] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:29.193 [2024-12-14 18:37:44.881611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:29.193 [2024-12-14 18:37:44.881624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:29.193 [2024-12-14 18:37:44.881631] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:17:29.193 [2024-12-14 18:37:44.881636] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:29.193 [2024-12-14 18:37:44.881644] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:17:29.193 [2024-12-14 18:37:44.881654] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:29.193 [2024-12-14 18:37:44.881664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:29.193 [2024-12-14 18:37:44.881674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:29.193 [2024-12-14 18:37:44.881732] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:17:29.193 [2024-12-14 18:37:44.881742] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:29.193 [2024-12-14 18:37:44.881768] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:29.193 [2024-12-14 18:37:44.881773] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:29.193 [2024-12-14 18:37:44.881777] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:29.193 [2024-12-14 18:37:44.881784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:29.193 [2024-12-14 18:37:44.881799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:29.193 [2024-12-14 18:37:44.881810] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:17:29.193 [2024-12-14 18:37:44.881822] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:17:29.193 [2024-12-14 18:37:44.881831] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:17:29.193 [2024-12-14 18:37:44.881838] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:29.193 [2024-12-14 18:37:44.881843] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:29.193 [2024-12-14 18:37:44.881846] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:29.193 [2024-12-14 18:37:44.881853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:29.193 [2024-12-14 18:37:44.881881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:29.193 [2024-12-14 18:37:44.881896] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:29.193 [2024-12-14 18:37:44.881906] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:29.193 
[2024-12-14 18:37:44.881914] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:29.193 [2024-12-14 18:37:44.881919] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:29.193 [2024-12-14 18:37:44.881922] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:29.193 [2024-12-14 18:37:44.881929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:29.193 [2024-12-14 18:37:44.881941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:29.193 [2024-12-14 18:37:44.881951] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:29.193 [2024-12-14 18:37:44.881959] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:17:29.193 [2024-12-14 18:37:44.881969] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:17:29.193 [2024-12-14 18:37:44.881975] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:29.193 [2024-12-14 18:37:44.881980] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:29.193 [2024-12-14 18:37:44.881986] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:17:29.193 [2024-12-14 18:37:44.881991] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:17:29.193 [2024-12-14 18:37:44.881996] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:17:29.193 [2024-12-14 18:37:44.882001] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:17:29.193 [2024-12-14 18:37:44.882021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:29.193 [2024-12-14 18:37:44.882030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:29.193 [2024-12-14 18:37:44.882044] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:29.193 [2024-12-14 18:37:44.882058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:29.193 [2024-12-14 18:37:44.882070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:29.193 [2024-12-14 18:37:44.882100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:29.193 [2024-12-14 18:37:44.882113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: 
GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:29.193 [2024-12-14 18:37:44.882123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:29.193 [2024-12-14 18:37:44.882140] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:29.193 [2024-12-14 18:37:44.882145] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:29.193 [2024-12-14 18:37:44.882148] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:29.193 [2024-12-14 18:37:44.882151] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:29.193 [2024-12-14 18:37:44.882154] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:29.193 [2024-12-14 18:37:44.882161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:29.193 [2024-12-14 18:37:44.882168] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:29.193 [2024-12-14 18:37:44.882172] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:29.193 [2024-12-14 18:37:44.882176] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:29.193 [2024-12-14 18:37:44.882182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:29.193 [2024-12-14 18:37:44.882189] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:29.193 [2024-12-14 18:37:44.882193] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:29.193 [2024-12-14 18:37:44.882197] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:29.193 [2024-12-14 18:37:44.882203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:29.193 [2024-12-14 18:37:44.882210] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:29.193 [2024-12-14 18:37:44.882214] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:29.193 [2024-12-14 18:37:44.882218] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:29.193 [2024-12-14 18:37:44.882224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:29.193 [2024-12-14 18:37:44.882231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:29.193 [2024-12-14 18:37:44.882246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:29.193 [2024-12-14 18:37:44.882294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:29.193 [2024-12-14 18:37:44.882303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:29.193 ===================================================== 00:17:29.193 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:29.193 ===================================================== 00:17:29.193 Controller Capabilities/Features 00:17:29.193 ================================ 00:17:29.193 Vendor ID: 4e58 00:17:29.193 Subsystem Vendor ID: 4e58 00:17:29.193 Serial Number: SPDK1 00:17:29.193 Model Number: SPDK bdev Controller 00:17:29.193 Firmware Version: 24.09.1 00:17:29.193 Recommended Arb Burst: 6 00:17:29.193 IEEE OUI Identifier: 8d 6b 50 00:17:29.193 Multi-path I/O 00:17:29.194 May have multiple subsystem ports: Yes 00:17:29.194 May have multiple controllers: Yes 00:17:29.194 Associated with SR-IOV VF: No 00:17:29.194 Max Data Transfer Size: 131072 00:17:29.194 Max Number of Namespaces: 32 00:17:29.194 Max Number of I/O Queues: 127 00:17:29.194 NVMe Specification Version (VS): 1.3 00:17:29.194 NVMe Specification Version (Identify): 1.3 00:17:29.194 Maximum Queue Entries: 256 00:17:29.194 Contiguous Queues Required: Yes 00:17:29.194 Arbitration Mechanisms Supported 00:17:29.194 Weighted Round Robin: Not Supported 00:17:29.194 Vendor Specific: Not Supported 00:17:29.194 Reset Timeout: 15000 ms 00:17:29.194 Doorbell Stride: 4 bytes 00:17:29.194 NVM Subsystem Reset: Not Supported 00:17:29.194 Command Sets Supported 00:17:29.194 NVM Command Set: Supported 00:17:29.194 Boot Partition: Not Supported 00:17:29.194 Memory Page Size Minimum: 4096 bytes 00:17:29.194 Memory Page Size Maximum: 4096 bytes 00:17:29.194 Persistent Memory Region: Not Supported 00:17:29.194 Optional Asynchronous Events Supported 00:17:29.194 Namespace Attribute Notices: Supported 00:17:29.194 Firmware Activation Notices: Not Supported 00:17:29.194 ANA Change Notices: Not Supported 00:17:29.194 PLE Aggregate Log Change Notices: Not Supported 00:17:29.194 LBA Status Info Alert Notices: Not Supported 00:17:29.194 EGE Aggregate Log Change Notices: Not Supported 00:17:29.194 Normal NVM Subsystem Shutdown event: Not Supported 00:17:29.194 Zone Descriptor Change Notices: Not Supported 00:17:29.194 Discovery Log Change Notices: Not Supported 00:17:29.194 Controller Attributes 00:17:29.194 128-bit Host Identifier: Supported 00:17:29.194 Non-Operational Permissive Mode: Not Supported 00:17:29.194 NVM Sets: Not Supported 00:17:29.194 Read Recovery Levels: Not Supported 00:17:29.194 Endurance Groups: Not Supported 00:17:29.194 Predictable Latency Mode: Not Supported 00:17:29.194 Traffic Based Keep ALive: Not Supported 00:17:29.194 Namespace Granularity: Not Supported 00:17:29.194 SQ Associations: Not Supported 00:17:29.194 UUID List: Not Supported 00:17:29.194 Multi-Domain Subsystem: Not Supported 00:17:29.194 Fixed Capacity Management: Not Supported 00:17:29.194 Variable Capacity Management: Not Supported 00:17:29.194 Delete Endurance Group: Not Supported 00:17:29.194 Delete NVM Set: Not Supported 00:17:29.194 Extended LBA Formats Supported: Not Supported 00:17:29.194 Flexible Data Placement Supported: Not Supported 00:17:29.194 00:17:29.194 Controller Memory Buffer Support 00:17:29.194 ================================ 00:17:29.194 Supported: No 00:17:29.194 00:17:29.194 Persistent Memory Region Support 00:17:29.194 ================================ 00:17:29.194 Supported: No 00:17:29.194 00:17:29.194 Admin Command Set Attributes 00:17:29.194 ============================ 00:17:29.194 Security Send/Receive: Not Supported 
00:17:29.194 Format NVM: Not Supported 00:17:29.194 Firmware Activate/Download: Not Supported 00:17:29.194 Namespace Management: Not Supported 00:17:29.194 Device Self-Test: Not Supported 00:17:29.194 Directives: Not Supported 00:17:29.194 NVMe-MI: Not Supported 00:17:29.194 Virtualization Management: Not Supported 00:17:29.194 Doorbell Buffer Config: Not Supported 00:17:29.194 Get LBA Status Capability: Not Supported 00:17:29.194 Command & Feature Lockdown Capability: Not Supported 00:17:29.194 Abort Command Limit: 4 00:17:29.194 Async Event Request Limit: 4 00:17:29.194 Number of Firmware Slots: N/A 00:17:29.194 Firmware Slot 1 Read-Only: N/A 00:17:29.194 Firmware Activation Without Reset: N/A 00:17:29.194 Multiple Update Detection Support: N/A 00:17:29.194 Firmware Update Granularity: No Information Provided 00:17:29.194 Per-Namespace SMART Log: No 00:17:29.194 Asymmetric Namespace Access Log Page: Not Supported 00:17:29.194 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:29.194 Command Effects Log Page: Supported 00:17:29.194 Get Log Page Extended Data: Supported 00:17:29.194 Telemetry Log Pages: Not Supported 00:17:29.194 Persistent Event Log Pages: Not Supported 00:17:29.194 Supported Log Pages Log Page: May Support 00:17:29.194 Commands Supported & Effects Log Page: Not Supported 00:17:29.194 Feature Identifiers & Effects Log Page: May Support 00:17:29.194 NVMe-MI Commands & Effects Log Page: May Support 00:17:29.194 Data Area 4 for Telemetry Log: Not Supported 00:17:29.194 Error Log Page Entries Supported: 128 00:17:29.194 Keep Alive: Supported 00:17:29.194 Keep Alive Granularity: 10000 ms 00:17:29.194 00:17:29.194 NVM Command Set Attributes 00:17:29.194 ========================== 00:17:29.194 Submission Queue Entry Size 00:17:29.194 Max: 64 00:17:29.194 Min: 64 00:17:29.194 Completion Queue Entry Size 00:17:29.194 Max: 16 00:17:29.194 Min: 16 00:17:29.194 Number of Namespaces: 32 00:17:29.194 Compare Command: Supported 00:17:29.194 Write Uncorrectable Command: Not Supported 00:17:29.194 Dataset Management Command: Supported 00:17:29.194 Write Zeroes Command: Supported 00:17:29.194 Set Features Save Field: Not Supported 00:17:29.194 Reservations: Not Supported 00:17:29.194 Timestamp: Not Supported 00:17:29.194 Copy: Supported 00:17:29.194 Volatile Write Cache: Present 00:17:29.194 Atomic Write Unit (Normal): 1 00:17:29.194 Atomic Write Unit (PFail): 1 00:17:29.194 Atomic Compare & Write Unit: 1 00:17:29.194 Fused Compare & Write: Supported 00:17:29.194 Scatter-Gather List 00:17:29.194 SGL Command Set: Supported (Dword aligned) 00:17:29.194 SGL Keyed: Not Supported 00:17:29.194 SGL Bit Bucket Descriptor: Not Supported 00:17:29.194 SGL Metadata Pointer: Not Supported 00:17:29.194 Oversized SGL: Not Supported 00:17:29.194 SGL Metadata Address: Not Supported 00:17:29.194 SGL Offset: Not Supported 00:17:29.194 Transport SGL Data Block: Not Supported 00:17:29.194 Replay Protected Memory Block: Not Supported 00:17:29.194 00:17:29.194 Firmware Slot Information 00:17:29.194 ========================= 00:17:29.194 Active slot: 1 00:17:29.194 Slot 1 Firmware Revision: 24.09.1 00:17:29.194 00:17:29.194 00:17:29.194 Commands Supported and Effects 00:17:29.194 ============================== 00:17:29.194 Admin Commands 00:17:29.194 -------------- 00:17:29.194 Get Log Page (02h): Supported 00:17:29.194 Identify (06h): Supported 00:17:29.194 Abort (08h): Supported 00:17:29.194 Set Features (09h): Supported 00:17:29.194 Get Features (0Ah): Supported 00:17:29.194 Asynchronous Event Request (0Ch):
Supported 00:17:29.194 Keep Alive (18h): Supported 00:17:29.194 I/O Commands 00:17:29.194 ------------ 00:17:29.194 Flush (00h): Supported LBA-Change 00:17:29.194 Write (01h): Supported LBA-Change 00:17:29.194 Read (02h): Supported 00:17:29.194 Compare (05h): Supported 00:17:29.194 Write Zeroes (08h): Supported LBA-Change 00:17:29.194 Dataset Management (09h): Supported LBA-Change 00:17:29.194 Copy (19h): Supported LBA-Change 00:17:29.194 00:17:29.194 Error Log 00:17:29.194 ========= 00:17:29.194 00:17:29.194 Arbitration 00:17:29.194 =========== 00:17:29.194 Arbitration Burst: 1 00:17:29.194 00:17:29.194 Power Management 00:17:29.194 ================ 00:17:29.194 Number of Power States: 1 00:17:29.194 Current Power State: Power State #0 00:17:29.194 Power State #0: 00:17:29.194 Max Power: 0.00 W 00:17:29.194 Non-Operational State: Operational 00:17:29.194 Entry Latency: Not Reported 00:17:29.194 Exit Latency: Not Reported 00:17:29.194 Relative Read Throughput: 0 00:17:29.194 Relative Read Latency: 0 00:17:29.194 Relative Write Throughput: 0 00:17:29.194 Relative Write Latency: 0 00:17:29.194 Idle Power: Not Reported 00:17:29.194 Active Power: Not Reported 00:17:29.194 Non-Operational Permissive Mode: Not Supported 00:17:29.194 00:17:29.194 Health Information 00:17:29.194 ================== 00:17:29.194 Critical Warnings: 00:17:29.194 Available Spare Space: OK 00:17:29.194 Temperature: OK 00:17:29.194 Device Reliability: OK 00:17:29.194 Read Only: No 00:17:29.194 Volatile Memory Backup: OK 00:17:29.194 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:29.194 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:29.194 Available Spare: 0% 00:17:29.194 Availabl[2024-12-14 18:37:44.882468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:29.194 [2024-12-14 18:37:44.882486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:29.194 [2024-12-14 18:37:44.882540] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:17:29.194 [2024-12-14 18:37:44.882556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.194 [2024-12-14 18:37:44.882564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.194 [2024-12-14 18:37:44.882570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.194 [2024-12-14 18:37:44.882577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.194 [2024-12-14 18:37:44.884657] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:29.194 [2024-12-14 18:37:44.884690] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:29.195 [2024-12-14 18:37:44.885257] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:29.195 [2024-12-14 18:37:44.885454] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:17:29.195 [2024-12-14 18:37:44.885462] 
nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:17:29.195 [2024-12-14 18:37:44.886284] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:29.195 [2024-12-14 18:37:44.886320] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:17:29.195 [2024-12-14 18:37:44.886383] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:29.195 [2024-12-14 18:37:44.889659] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:29.195 e Spare Threshold: 0% 00:17:29.195 Life Percentage Used: 0% 00:17:29.195 Data Units Read: 0 00:17:29.195 Data Units Written: 0 00:17:29.195 Host Read Commands: 0 00:17:29.195 Host Write Commands: 0 00:17:29.195 Controller Busy Time: 0 minutes 00:17:29.195 Power Cycles: 0 00:17:29.195 Power On Hours: 0 hours 00:17:29.195 Unsafe Shutdowns: 0 00:17:29.195 Unrecoverable Media Errors: 0 00:17:29.195 Lifetime Error Log Entries: 0 00:17:29.195 Warning Temperature Time: 0 minutes 00:17:29.195 Critical Temperature Time: 0 minutes 00:17:29.195 00:17:29.195 Number of Queues 00:17:29.195 ================ 00:17:29.195 Number of I/O Submission Queues: 127 00:17:29.195 Number of I/O Completion Queues: 127 00:17:29.195 00:17:29.195 Active Namespaces 00:17:29.195 ================= 00:17:29.195 Namespace ID:1 00:17:29.195 Error Recovery Timeout: Unlimited 00:17:29.195 Command Set Identifier: NVM (00h) 00:17:29.195 Deallocate: Supported 00:17:29.195 Deallocated/Unwritten Error: Not Supported 00:17:29.195 Deallocated Read Value: Unknown 00:17:29.195 Deallocate in Write Zeroes: Not Supported 00:17:29.195 Deallocated Guard Field: 0xFFFF 00:17:29.195 Flush: Supported 00:17:29.195 Reservation: Supported 00:17:29.195 Namespace Sharing Capabilities: Multiple Controllers 00:17:29.195 Size (in LBAs): 131072 (0GiB) 00:17:29.195 Capacity (in LBAs): 131072 (0GiB) 00:17:29.195 Utilization (in LBAs): 131072 (0GiB) 00:17:29.195 NGUID: 21CCAAFCCBFF4416BA0BE30E786E7A73 00:17:29.195 UUID: 21ccaafc-cbff-4416-ba0b-e30e786e7a73 00:17:29.195 Thin Provisioning: Not Supported 00:17:29.195 Per-NS Atomic Units: Yes 00:17:29.195 Atomic Boundary Size (Normal): 0 00:17:29.195 Atomic Boundary Size (PFail): 0 00:17:29.195 Atomic Boundary Offset: 0 00:17:29.195 Maximum Single Source Range Length: 65535 00:17:29.195 Maximum Copy Length: 65535 00:17:29.195 Maximum Source Range Count: 1 00:17:29.195 NGUID/EUI64 Never Reused: No 00:17:29.195 Namespace Write Protected: No 00:17:29.195 Number of LBA Formats: 1 00:17:29.195 Current LBA Format: LBA Format #00 00:17:29.195 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:29.195 00:17:29.195 18:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:29.767 [2024-12-14 18:37:45.239017] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:35.043 Initializing NVMe Controllers 00:17:35.043 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:35.043 
Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:35.043 Initialization complete. Launching workers. 00:17:35.043 ======================================================== 00:17:35.043 Latency(us) 00:17:35.043 Device Information : IOPS MiB/s Average min max 00:17:35.043 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 37731.39 147.39 3392.12 1054.21 10403.15 00:17:35.043 ======================================================== 00:17:35.043 Total : 37731.39 147.39 3392.12 1054.21 10403.15 00:17:35.043 00:17:35.043 [2024-12-14 18:37:50.245076] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:35.044 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:35.044 [2024-12-14 18:37:50.598606] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:40.313 Initializing NVMe Controllers 00:17:40.313 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:40.313 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:40.313 Initialization complete. Launching workers. 00:17:40.313 ======================================================== 00:17:40.313 Latency(us) 00:17:40.313 Device Information : IOPS MiB/s Average min max 00:17:40.313 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15974.40 62.40 8019.28 4952.00 16036.23 00:17:40.313 ======================================================== 00:17:40.313 Total : 15974.40 62.40 8019.28 4952.00 16036.23 00:17:40.313 00:17:40.313 [2024-12-14 18:37:55.621437] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:40.313 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:40.313 [2024-12-14 18:37:55.890416] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:45.582 [2024-12-14 18:38:00.941979] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:45.582 Initializing NVMe Controllers 00:17:45.583 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:45.583 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:45.583 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:45.583 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:45.583 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:45.583 Initialization complete. Launching workers. 
00:17:45.583 Starting thread on core 2 00:17:45.583 Starting thread on core 3 00:17:45.583 Starting thread on core 1 00:17:45.583 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:45.583 [2024-12-14 18:38:01.308393] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:48.872 [2024-12-14 18:38:04.365142] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:48.872 Initializing NVMe Controllers 00:17:48.872 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:48.872 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:48.872 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:48.872 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:48.872 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:48.872 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:48.872 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:17:48.872 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:48.872 Initialization complete. Launching workers. 00:17:48.872 Starting thread on core 1 with urgent priority queue 00:17:48.872 Starting thread on core 2 with urgent priority queue 00:17:48.872 Starting thread on core 3 with urgent priority queue 00:17:48.872 Starting thread on core 0 with urgent priority queue 00:17:48.872 SPDK bdev Controller (SPDK1 ) core 0: 7065.67 IO/s 14.15 secs/100000 ios 00:17:48.872 SPDK bdev Controller (SPDK1 ) core 1: 7153.67 IO/s 13.98 secs/100000 ios 00:17:48.872 SPDK bdev Controller (SPDK1 ) core 2: 6996.00 IO/s 14.29 secs/100000 ios 00:17:48.872 SPDK bdev Controller (SPDK1 ) core 3: 7036.67 IO/s 14.21 secs/100000 ios 00:17:48.872 ======================================================== 00:17:48.872 00:17:48.872 18:38:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:49.131 [2024-12-14 18:38:04.713519] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:49.131 Initializing NVMe Controllers 00:17:49.131 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:49.131 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:49.131 Namespace ID: 1 size: 0GB 00:17:49.131 Initialization complete. 00:17:49.131 INFO: using host memory buffer for IO 00:17:49.131 Hello world! 
00:17:49.131 [2024-12-14 18:38:04.745086] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:49.131 18:38:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:49.389 [2024-12-14 18:38:05.091436] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:50.765 Initializing NVMe Controllers 00:17:50.765 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:50.765 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:50.765 Initialization complete. Launching workers. 00:17:50.765 submit (in ns) avg, min, max = 7984.5, 3586.4, 4074670.0 00:17:50.765 complete (in ns) avg, min, max = 18784.4, 1904.5, 7020506.4 00:17:50.765 00:17:50.765 Submit histogram 00:17:50.765 ================ 00:17:50.765 Range in us Cumulative Count 00:17:50.765 3.578 - 3.593: 0.0131% ( 2) 00:17:50.765 3.593 - 3.607: 0.0393% ( 4) 00:17:50.765 3.607 - 3.622: 0.5495% ( 78) 00:17:50.765 3.622 - 3.636: 3.5261% ( 455) 00:17:50.765 3.636 - 3.651: 13.2082% ( 1480) 00:17:50.765 3.651 - 3.665: 22.3538% ( 1398) 00:17:50.765 3.665 - 3.680: 34.2732% ( 1822) 00:17:50.765 3.680 - 3.695: 45.6758% ( 1743) 00:17:50.765 3.695 - 3.709: 51.6224% ( 909) 00:17:50.765 3.709 - 3.724: 57.1961% ( 852) 00:17:50.765 3.724 - 3.753: 67.7221% ( 1609) 00:17:50.765 3.753 - 3.782: 75.1145% ( 1130) 00:17:50.765 3.782 - 3.811: 78.2873% ( 485) 00:17:50.765 3.811 - 3.840: 81.0349% ( 420) 00:17:50.765 3.840 - 3.869: 82.6835% ( 252) 00:17:50.765 3.869 - 3.898: 83.8087% ( 172) 00:17:50.765 3.898 - 3.927: 84.9536% ( 175) 00:17:50.765 3.927 - 3.956: 86.1311% ( 180) 00:17:50.765 3.956 - 3.985: 87.0666% ( 143) 00:17:50.765 3.985 - 4.015: 88.0413% ( 149) 00:17:50.765 4.015 - 4.044: 90.5469% ( 383) 00:17:50.765 4.044 - 4.073: 93.3076% ( 422) 00:17:50.765 4.073 - 4.102: 94.7337% ( 218) 00:17:50.765 4.102 - 4.131: 95.9833% ( 191) 00:17:50.765 4.131 - 4.160: 96.5262% ( 83) 00:17:50.765 4.160 - 4.189: 96.9842% ( 70) 00:17:50.765 4.189 - 4.218: 97.2655% ( 43) 00:17:50.765 4.218 - 4.247: 97.4552% ( 29) 00:17:50.765 4.247 - 4.276: 97.5141% ( 9) 00:17:50.765 4.276 - 4.305: 97.5860% ( 11) 00:17:50.765 4.305 - 4.335: 97.6449% ( 9) 00:17:50.765 4.335 - 4.364: 97.6842% ( 6) 00:17:50.765 4.364 - 4.393: 97.7169% ( 5) 00:17:50.765 4.393 - 4.422: 97.7365% ( 3) 00:17:50.765 4.422 - 4.451: 97.7496% ( 2) 00:17:50.765 4.451 - 4.480: 97.7692% ( 3) 00:17:50.765 4.480 - 4.509: 97.7757% ( 1) 00:17:50.765 4.509 - 4.538: 97.8215% ( 7) 00:17:50.765 4.538 - 4.567: 97.8870% ( 10) 00:17:50.765 4.567 - 4.596: 97.9916% ( 16) 00:17:50.765 4.596 - 4.625: 98.1028% ( 17) 00:17:50.765 4.625 - 4.655: 98.1552% ( 8) 00:17:50.765 4.655 - 4.684: 98.2271% ( 11) 00:17:50.765 4.684 - 4.713: 98.3056% ( 12) 00:17:50.765 4.713 - 4.742: 98.3514% ( 7) 00:17:50.765 4.742 - 4.771: 98.3580% ( 1) 00:17:50.765 4.771 - 4.800: 98.3645% ( 1) 00:17:50.765 4.800 - 4.829: 98.3907% ( 4) 00:17:50.765 4.829 - 4.858: 98.4038% ( 2) 00:17:50.765 4.858 - 4.887: 98.4103% ( 1) 00:17:50.765 4.916 - 4.945: 98.4234% ( 2) 00:17:50.765 4.945 - 4.975: 98.4299% ( 1) 00:17:50.765 4.975 - 5.004: 98.4430% ( 2) 00:17:50.765 5.004 - 5.033: 98.4561% ( 2) 00:17:50.765 5.062 - 5.091: 98.4626% ( 1) 00:17:50.765 5.091 - 5.120: 98.4757% ( 2) 00:17:50.765 5.207 - 5.236: 98.4823% ( 1) 00:17:50.765 5.236 - 5.265: 98.4954% ( 2) 
00:17:50.765 5.353 - 5.382: 98.5019% ( 1) 00:17:50.765 5.585 - 5.615: 98.5084% ( 1) 00:17:50.765 5.935 - 5.964: 98.5215% ( 2) 00:17:50.765 6.138 - 6.167: 98.5281% ( 1) 00:17:50.765 6.458 - 6.487: 98.5346% ( 1) 00:17:50.765 6.575 - 6.604: 98.5411% ( 1) 00:17:50.765 6.778 - 6.807: 98.5477% ( 1) 00:17:50.765 6.807 - 6.836: 98.5542% ( 1) 00:17:50.765 6.924 - 6.953: 98.5608% ( 1) 00:17:50.765 7.040 - 7.069: 98.5673% ( 1) 00:17:50.765 7.069 - 7.098: 98.5739% ( 1) 00:17:50.765 7.156 - 7.185: 98.5804% ( 1) 00:17:50.765 7.244 - 7.273: 98.5869% ( 1) 00:17:50.765 7.389 - 7.418: 98.6000% ( 2) 00:17:50.765 7.564 - 7.622: 98.6131% ( 2) 00:17:50.765 7.738 - 7.796: 98.6197% ( 1) 00:17:50.765 7.796 - 7.855: 98.6327% ( 2) 00:17:50.765 7.855 - 7.913: 98.6458% ( 2) 00:17:50.765 7.971 - 8.029: 98.6589% ( 2) 00:17:50.765 8.087 - 8.145: 98.6654% ( 1) 00:17:50.765 8.145 - 8.204: 98.6916% ( 4) 00:17:50.765 8.204 - 8.262: 98.7112% ( 3) 00:17:50.765 8.262 - 8.320: 98.7243% ( 2) 00:17:50.765 8.320 - 8.378: 98.7439% ( 3) 00:17:50.765 8.378 - 8.436: 98.7570% ( 2) 00:17:50.765 8.436 - 8.495: 98.7636% ( 1) 00:17:50.765 8.553 - 8.611: 98.7767% ( 2) 00:17:50.765 8.611 - 8.669: 98.7832% ( 1) 00:17:50.765 8.669 - 8.727: 98.7963% ( 2) 00:17:50.765 8.844 - 8.902: 98.8094% ( 2) 00:17:50.765 8.902 - 8.960: 98.8159% ( 1) 00:17:50.765 9.018 - 9.076: 98.8225% ( 1) 00:17:50.765 9.135 - 9.193: 98.8290% ( 1) 00:17:50.765 9.193 - 9.251: 98.8486% ( 3) 00:17:50.765 9.309 - 9.367: 98.8617% ( 2) 00:17:50.765 9.367 - 9.425: 98.8944% ( 5) 00:17:50.765 9.542 - 9.600: 98.9010% ( 1) 00:17:50.765 9.600 - 9.658: 98.9140% ( 2) 00:17:50.765 9.658 - 9.716: 98.9337% ( 3) 00:17:50.765 9.716 - 9.775: 98.9467% ( 2) 00:17:50.765 9.833 - 9.891: 98.9598% ( 2) 00:17:50.765 9.891 - 9.949: 98.9729% ( 2) 00:17:50.765 9.949 - 10.007: 98.9795% ( 1) 00:17:50.765 10.065 - 10.124: 98.9860% ( 1) 00:17:50.765 10.182 - 10.240: 98.9925% ( 1) 00:17:50.765 10.240 - 10.298: 99.0253% ( 5) 00:17:50.765 10.356 - 10.415: 99.0449% ( 3) 00:17:50.765 10.531 - 10.589: 99.0580% ( 2) 00:17:50.765 10.589 - 10.647: 99.0710% ( 2) 00:17:50.765 10.647 - 10.705: 99.0776% ( 1) 00:17:50.765 10.705 - 10.764: 99.0841% ( 1) 00:17:50.765 10.764 - 10.822: 99.0907% ( 1) 00:17:50.765 10.822 - 10.880: 99.1038% ( 2) 00:17:50.765 10.880 - 10.938: 99.1168% ( 2) 00:17:50.765 10.938 - 10.996: 99.1234% ( 1) 00:17:50.765 10.996 - 11.055: 99.1365% ( 2) 00:17:50.765 11.055 - 11.113: 99.1626% ( 4) 00:17:50.765 11.171 - 11.229: 99.1692% ( 1) 00:17:50.765 11.229 - 11.287: 99.1757% ( 1) 00:17:50.765 11.462 - 11.520: 99.1823% ( 1) 00:17:50.765 11.636 - 11.695: 99.1888% ( 1) 00:17:50.765 12.044 - 12.102: 99.1953% ( 1) 00:17:50.765 12.102 - 12.160: 99.2019% ( 1) 00:17:50.765 12.276 - 12.335: 99.2084% ( 1) 00:17:50.765 12.335 - 12.393: 99.2215% ( 2) 00:17:50.765 12.393 - 12.451: 99.2281% ( 1) 00:17:50.765 12.451 - 12.509: 99.2346% ( 1) 00:17:50.765 12.509 - 12.567: 99.2477% ( 2) 00:17:50.765 12.684 - 12.742: 99.2542% ( 1) 00:17:50.765 12.800 - 12.858: 99.2608% ( 1) 00:17:50.765 12.858 - 12.916: 99.2673% ( 1) 00:17:50.765 13.207 - 13.265: 99.2738% ( 1) 00:17:50.765 13.556 - 13.615: 99.2804% ( 1) 00:17:50.765 13.789 - 13.847: 99.2869% ( 1) 00:17:50.765 13.847 - 13.905: 99.2935% ( 1) 00:17:50.765 13.905 - 13.964: 99.3000% ( 1) 00:17:50.765 14.080 - 14.138: 99.3066% ( 1) 00:17:50.765 15.593 - 15.709: 99.3131% ( 1) 00:17:50.765 16.756 - 16.873: 99.3196% ( 1) 00:17:50.765 17.804 - 17.920: 99.3327% ( 2) 00:17:50.765 17.920 - 18.036: 99.3851% ( 8) 00:17:50.765 18.036 - 18.153: 99.4570% ( 11) 00:17:50.765 18.153 - 18.269: 
99.5094% ( 8) 00:17:50.765 18.269 - 18.385: 99.5486% ( 6) 00:17:50.765 18.385 - 18.502: 99.5617% ( 2) 00:17:50.765 18.502 - 18.618: 99.5748% ( 2) 00:17:50.765 18.618 - 18.735: 99.5879% ( 2) 00:17:50.765 18.735 - 18.851: 99.5944% ( 1) 00:17:50.765 18.967 - 19.084: 99.6009% ( 1) 00:17:50.765 19.316 - 19.433: 99.6140% ( 2) 00:17:50.765 19.549 - 19.665: 99.6533% ( 6) 00:17:50.765 19.665 - 19.782: 99.6794% ( 4) 00:17:50.765 19.782 - 19.898: 99.6925% ( 2) 00:17:50.765 19.898 - 20.015: 99.7252% ( 5) 00:17:50.765 20.015 - 20.131: 99.7776% ( 8) 00:17:50.765 20.131 - 20.247: 99.7907% ( 2) 00:17:50.765 20.247 - 20.364: 99.8037% ( 2) 00:17:50.765 20.364 - 20.480: 99.8168% ( 2) 00:17:50.766 20.480 - 20.596: 99.8299% ( 2) 00:17:50.766 21.062 - 21.178: 99.8365% ( 1) 00:17:50.766 25.251 - 25.367: 99.8430% ( 1) 00:17:50.766 25.600 - 25.716: 99.8495% ( 1) 00:17:50.766 26.531 - 26.647: 99.8561% ( 1) 00:17:50.766 26.647 - 26.764: 99.8626% ( 1) 00:17:50.766 26.764 - 26.880: 99.8692% ( 1) 00:17:50.766 27.229 - 27.345: 99.8757% ( 1) 00:17:50.766 39.564 - 39.796: 99.8822% ( 1) 00:17:50.766 40.262 - 40.495: 99.8888% ( 1) 00:17:50.766 43.753 - 43.985: 99.8953% ( 1) 00:17:50.766 3038.487 - 3053.382: 99.9084% ( 2) 00:17:50.766 3961.949 - 3991.738: 99.9215% ( 2) 00:17:50.766 3991.738 - 4021.527: 99.9542% ( 5) 00:17:50.766 4021.527 - 4051.316: 99.9935% ( 6) 00:17:50.766 4051.316 - 4081.105: 100.0000% ( 1) 00:17:50.766 00:17:50.766 Complete histogram 00:17:50.766 ================== 00:17:50.766 Range in us Cumulative Count 00:17:50.766 1.891 - 1.905: 0.0065% ( 1) 00:17:50.766 1.905 - 1.920: 0.1112% ( 16) 00:17:50.766 1.920 - 1.935: 0.4710% ( 55) 00:17:50.766 1.935 - 1.949: 18.5791% ( 2768) 00:17:50.766 1.949 - 1.964: 70.7248% ( 7971) 00:17:50.766 1.964 - 1.978: 73.7145% ( 457) 00:17:50.766 1.978 - 1.993: 74.2902% ( 88) 00:17:50.766 1.993 - 2.007: 79.4060% ( 782) 00:17:50.766 2.007 - 2.022: 91.5413% ( 1855) 00:17:50.766 2.022 - 2.036: 93.0001% ( 223) 00:17:50.766 2.036 - 2.051: 93.2422% ( 37) 00:17:50.766 2.051 - 2.065: 94.0926% ( 130) 00:17:50.766 2.065 - 2.080: 96.2318% ( 327) 00:17:50.766 2.080 - 2.095: 97.2131% ( 150) 00:17:50.766 2.095 - 2.109: 97.4029% ( 29) 00:17:50.766 2.109 - 2.124: 97.5141% ( 17) 00:17:50.766 2.124 - 2.138: 97.7561% ( 37) 00:17:50.766 2.138 - 2.153: 98.4365% ( 104) 00:17:50.766 2.153 - 2.167: 98.6262% ( 29) 00:17:50.766 2.167 - 2.182: 98.6458% ( 3) 00:17:50.766 2.182 - 2.196: 98.6589% ( 2) 00:17:50.766 2.196 - 2.211: 98.6916% ( 5) 00:17:50.766 2.211 - 2.225: 98.7047% ( 2) 00:17:50.766 2.225 - 2.240: 98.7112% ( 1) 00:17:50.766 2.240 - 2.255: 98.7243% ( 2) 00:17:50.766 2.255 - 2.269: 98.7309% ( 1) 00:17:50.766 2.269 - 2.284: 98.7505% ( 3) 00:17:50.766 2.284 - 2.298: 98.7636% ( 2) 00:17:50.766 2.313 - 2.327: 98.7701% ( 1) 00:17:50.766 2.342 - 2.356: 98.7963% ( 4) 00:17:50.766 2.356 - 2.371: 98.8028% ( 1) 00:17:50.766 2.400 - 2.415: 98.8094% ( 1) 00:17:50.766 2.415 - 2.429: 98.8159% ( 1) 00:17:50.766 2.516 - 2.531: 98.8225% ( 1) 00:17:50.766 2.851 - 2.865: 98.8290% ( 1) 00:17:50.766 3.287 - 3.302: 98.8355% ( 1) 00:17:50.766 3.433 - 3.447: 98.8421% ( 1) 00:17:50.766 3.491 - 3.505: 98.8486% ( 1) 00:17:50.766 3.505 - 3.520: 98.8552% ( 1) 00:17:50.766 3.520 - 3.535: 98.8617% ( 1) 00:17:50.766 3.651 - 3.665: 98.8682% ( 1) 00:17:50.766 3.665 - 3.680: 98.8879% ( 3) 00:17:50.766 3.680 - 3.695: 98.9010% ( 2) 00:17:50.766 3.695 - 3.709: 98.9075% ( 1) 00:17:50.766 3.709 - 3.724: 98.9140% ( 1) 00:17:50.766 3.724 - 3.753: 98.9271% ( 2) 00:17:50.766 3.753 - 3.782: 98.9533% ( 4) 00:17:50.766 3.782 - 3.811: 
98.9729% ( 3) 3.811 - 3.840: 98.9860% ( 2) 00:17:50.766 3.840 - 3.869: 98.9991% ( 2) 00:17:50.766 3.869 - 3.898: 99.0056% ( 1) 00:17:50.766 3.898 - 3.927: 99.0122% ( 1) 00:17:50.766 3.927 - 3.956: 99.0514% ( 6) 00:17:50.766 3.956 - 3.985: 99.0645% ( 2) 00:17:50.766 4.015 - 4.044: 99.0776% ( 2) 00:17:50.766 4.131 - 4.160: 99.0907% ( 2) 00:17:50.766 4.160 - 4.189: 99.0972% ( 1) 00:17:50.766 4.335 - 4.364: 99.1038% ( 1) 00:17:50.766 4.567 - 4.596: 99.1103% ( 1) 00:17:50.766 4.625 - 4.655: 99.1168% ( 1) 00:17:50.766 4.655 - 4.684: 99.1234% ( 1) 00:17:50.766 4.684 - 4.713: 99.1299% ( 1) 00:17:50.766 4.800 - 4.829: 99.1365% ( 1) 00:17:50.766 4.916 - 4.945: 99.1430% ( 1) 00:17:50.766 5.964 - 5.993: 99.1495% ( 1) 00:17:50.766 6.051 - 6.080: 99.1561% ( 1) 00:17:50.766 6.109 - 6.138: 99.1626% ( 1) 00:17:50.766 6.225 - 6.255: 99.1692% ( 1) 00:17:50.766 6.255 - 6.284: 99.1757% ( 1) 00:17:50.766 7.011 - 7.040: 99.1823% ( 1) 00:17:50.766 7.185 - 7.215: 99.1888% ( 1) 00:17:50.766 7.389 - 7.418: 99.1953% ( 1) 00:17:50.766 7.505 - 7.564: 99.2019% ( 1) 00:17:50.766 7.738 - 7.796: 99.2084% ( 1) 00:17:50.766 7.796 - 7.855: 99.2150% ( 1) 00:17:50.766 8.145 - 8.204: 99.2215% ( 1) 00:17:50.766 8.320 - 8.378: 99.2346% ( 2) 00:17:50.766 8.495 - 8.553: 99.2411% ( 1) 00:17:50.766 8.553 - 8.611: 99.2477% ( 1) 00:17:50.766 8.611 - 8.669: 99.2608% ( 2) [2024-12-14 18:38:06.107096] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:50.766 00:17:50.766 8.727 - 8.785: 99.2673% ( 1) 00:17:50.766 8.785 - 8.844: 99.2738% ( 1) 00:17:50.766 9.193 - 9.251: 99.2869% ( 2) 00:17:50.766 9.309 - 9.367: 99.2935% ( 1) 00:17:50.766 9.367 - 9.425: 99.3000% ( 1) 00:17:50.766 11.636 - 11.695: 99.3066% ( 1) 00:17:50.766 12.218 - 12.276: 99.3131% ( 1) 00:17:50.766 13.789 - 13.847: 99.3196% ( 1) 00:17:50.766 14.138 - 14.196: 99.3262% ( 1) 00:17:50.766 16.175 - 16.291: 99.3393% ( 2) 00:17:50.766 16.291 - 16.407: 99.3851% ( 7) 00:17:50.766 16.407 - 16.524: 99.4178% ( 5) 00:17:50.766 16.524 - 16.640: 99.4309% ( 2) 00:17:50.766 16.640 - 16.756: 99.4374% ( 1) 00:17:50.766 17.105 - 17.222: 99.4439% ( 1) 00:17:50.766 17.571 - 17.687: 99.4505% ( 1) 00:17:50.766 17.804 - 17.920: 99.4570% ( 1) 00:17:50.766 17.920 - 18.036: 99.4636% ( 1) 00:17:50.766 18.036 - 18.153: 99.4897% ( 4) 00:17:50.766 18.153 - 18.269: 99.4963% ( 1) 00:17:50.766 18.269 - 18.385: 99.5224% ( 4) 00:17:50.766 18.502 - 18.618: 99.5421% ( 3) 00:17:50.766 18.735 - 18.851: 99.5551% ( 2) 00:17:50.766 19.665 - 19.782: 99.5617% ( 1) 00:17:50.766 23.622 - 23.738: 99.5682% ( 1) 00:17:50.766 32.349 - 32.582: 99.5748% ( 1) 00:17:50.766 33.280 - 33.513: 99.5813% ( 1) 00:17:50.766 3023.593 - 3038.487: 99.5879% ( 1) 00:17:50.766 3038.487 - 3053.382: 99.6009% ( 2) 00:17:50.766 3068.276 - 3083.171: 99.6075% ( 1) 00:17:50.766 3961.949 - 3991.738: 99.6402% ( 5) 00:17:50.766 3991.738 - 4021.527: 99.8495% ( 32) 00:17:50.766 4021.527 - 4051.316: 99.9869% ( 21) 00:17:50.766 4051.316 - 4081.105: 99.9935% ( 1) 00:17:50.766 7000.436 - 7030.225: 100.0000% ( 1) 00:17:50.766 00:17:50.766 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:50.766 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:50.766 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1
00:17:50.766 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:50.766 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:50.766 [ 00:17:50.766 { 00:17:50.766 "allow_any_host": true, 00:17:50.766 "hosts": [], 00:17:50.766 "listen_addresses": [], 00:17:50.766 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:50.766 "subtype": "Discovery" 00:17:50.766 }, 00:17:50.766 { 00:17:50.766 "allow_any_host": true, 00:17:50.766 "hosts": [], 00:17:50.766 "listen_addresses": [ 00:17:50.766 { 00:17:50.766 "adrfam": "IPv4", 00:17:50.766 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:50.766 "trsvcid": "0", 00:17:50.766 "trtype": "VFIOUSER" 00:17:50.766 } 00:17:50.766 ], 00:17:50.766 "max_cntlid": 65519, 00:17:50.766 "max_namespaces": 32, 00:17:50.766 "min_cntlid": 1, 00:17:50.766 "model_number": "SPDK bdev Controller", 00:17:50.766 "namespaces": [ 00:17:50.766 { 00:17:50.766 "bdev_name": "Malloc1", 00:17:50.766 "name": "Malloc1", 00:17:50.766 "nguid": "21CCAAFCCBFF4416BA0BE30E786E7A73", 00:17:50.766 "nsid": 1, 00:17:50.766 "uuid": "21ccaafc-cbff-4416-ba0b-e30e786e7a73" 00:17:50.766 } 00:17:50.766 ], 00:17:50.766 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:50.766 "serial_number": "SPDK1", 00:17:50.766 "subtype": "NVMe" 00:17:50.766 }, 00:17:50.766 { 00:17:50.766 "allow_any_host": true, 00:17:50.766 "hosts": [], 00:17:50.766 "listen_addresses": [ 00:17:50.766 { 00:17:50.766 "adrfam": "IPv4", 00:17:50.766 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:50.766 "trsvcid": "0", 00:17:50.766 "trtype": "VFIOUSER" 00:17:50.766 } 00:17:50.766 ], 00:17:50.766 "max_cntlid": 65519, 00:17:50.766 "max_namespaces": 32, 00:17:50.766 "min_cntlid": 1, 00:17:50.766 "model_number": "SPDK bdev Controller", 00:17:50.766 "namespaces": [ 00:17:50.766 { 00:17:50.766 "bdev_name": "Malloc2", 00:17:50.766 "name": "Malloc2", 00:17:50.766 "nguid": "49B28C4CDFB9450F88AF0A46B372646E", 00:17:50.766 "nsid": 1, 00:17:50.766 "uuid": "49b28c4c-dfb9-450f-88af-0a46b372646e" 00:17:50.766 } 00:17:50.766 ], 00:17:50.766 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:50.766 "serial_number": "SPDK2", 00:17:50.766 "subtype": "NVMe" 00:17:50.766 } 00:17:50.766 ] 00:17:50.766 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:50.766 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=93257 00:17:50.767 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:50.767 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:50.767 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:50.767 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:50.767 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:17:50.767 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:17:50.767 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:17:51.025 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:51.026 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:17:51.026 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:17:51.026 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:17:51.026 [2024-12-14 18:38:06.643954] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:51.026 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:51.026 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:51.026 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:51.026 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:51.026 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:51.284 Malloc3 00:17:51.284 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:51.851 [2024-12-14 18:38:07.307539] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:51.851 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:51.851 Asynchronous Event Request test 00:17:51.851 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:51.851 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:51.851 Registering asynchronous event callbacks... 00:17:51.851 Starting namespace attribute notice tests for all controllers... 00:17:51.851 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:51.851 aer_cb - Changed Namespace 00:17:51.851 Cleaning up... 
00:17:52.111 [ 00:17:52.111 { 00:17:52.111 "allow_any_host": true, 00:17:52.111 "hosts": [], 00:17:52.111 "listen_addresses": [], 00:17:52.111 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:52.111 "subtype": "Discovery" 00:17:52.111 }, 00:17:52.111 { 00:17:52.111 "allow_any_host": true, 00:17:52.111 "hosts": [], 00:17:52.111 "listen_addresses": [ 00:17:52.111 { 00:17:52.111 "adrfam": "IPv4", 00:17:52.111 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:52.111 "trsvcid": "0", 00:17:52.111 "trtype": "VFIOUSER" 00:17:52.111 } 00:17:52.111 ], 00:17:52.111 "max_cntlid": 65519, 00:17:52.111 "max_namespaces": 32, 00:17:52.111 "min_cntlid": 1, 00:17:52.111 "model_number": "SPDK bdev Controller", 00:17:52.111 "namespaces": [ 00:17:52.111 { 00:17:52.111 "bdev_name": "Malloc1", 00:17:52.111 "name": "Malloc1", 00:17:52.111 "nguid": "21CCAAFCCBFF4416BA0BE30E786E7A73", 00:17:52.111 "nsid": 1, 00:17:52.111 "uuid": "21ccaafc-cbff-4416-ba0b-e30e786e7a73" 00:17:52.111 }, 00:17:52.111 { 00:17:52.111 "bdev_name": "Malloc3", 00:17:52.111 "name": "Malloc3", 00:17:52.111 "nguid": "40D159A46FAA482E9A51FE3BAAF5BF56", 00:17:52.111 "nsid": 2, 00:17:52.111 "uuid": "40d159a4-6faa-482e-9a51-fe3baaf5bf56" 00:17:52.111 } 00:17:52.111 ], 00:17:52.111 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:52.111 "serial_number": "SPDK1", 00:17:52.111 "subtype": "NVMe" 00:17:52.111 }, 00:17:52.111 { 00:17:52.111 "allow_any_host": true, 00:17:52.111 "hosts": [], 00:17:52.111 "listen_addresses": [ 00:17:52.111 { 00:17:52.111 "adrfam": "IPv4", 00:17:52.111 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:52.111 "trsvcid": "0", 00:17:52.111 "trtype": "VFIOUSER" 00:17:52.111 } 00:17:52.111 ], 00:17:52.111 "max_cntlid": 65519, 00:17:52.111 "max_namespaces": 32, 00:17:52.111 "min_cntlid": 1, 00:17:52.111 "model_number": "SPDK bdev Controller", 00:17:52.111 "namespaces": [ 00:17:52.111 { 00:17:52.111 "bdev_name": "Malloc2", 00:17:52.111 "name": "Malloc2", 00:17:52.111 "nguid": "49B28C4CDFB9450F88AF0A46B372646E", 00:17:52.111 "nsid": 1, 00:17:52.111 "uuid": "49b28c4c-dfb9-450f-88af-0a46b372646e" 00:17:52.111 } 00:17:52.111 ], 00:17:52.111 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:52.111 "serial_number": "SPDK2", 00:17:52.111 "subtype": "NVMe" 00:17:52.111 } 00:17:52.111 ] 00:17:52.111 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 93257 00:17:52.111 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:52.111 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:52.111 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:52.111 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:52.111 [2024-12-14 18:38:07.660589] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:52.111 [2024-12-14 18:38:07.660651] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93294 ] 00:17:52.111 [2024-12-14 18:38:07.796345] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:52.111 [2024-12-14 18:38:07.804920] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:52.111 [2024-12-14 18:38:07.804962] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fba84d2a000 00:17:52.111 [2024-12-14 18:38:07.805917] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:52.111 [2024-12-14 18:38:07.806929] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:52.111 [2024-12-14 18:38:07.807931] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:52.111 [2024-12-14 18:38:07.808936] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:52.111 [2024-12-14 18:38:07.809936] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:52.111 [2024-12-14 18:38:07.810941] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:52.111 [2024-12-14 18:38:07.811942] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:52.111 [2024-12-14 18:38:07.812948] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:52.111 [2024-12-14 18:38:07.813954] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:52.111 [2024-12-14 18:38:07.813977] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fba838b1000 00:17:52.111 [2024-12-14 18:38:07.815085] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:52.112 [2024-12-14 18:38:07.827493] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:52.112 [2024-12-14 18:38:07.827533] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:17:52.112 [2024-12-14 18:38:07.832636] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:52.112 [2024-12-14 18:38:07.832686] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:52.112 [2024-12-14 18:38:07.832764] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:17:52.112 
[2024-12-14 18:38:07.832789] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:17:52.112 [2024-12-14 18:38:07.832795] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:17:52.112 [2024-12-14 18:38:07.833660] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:52.112 [2024-12-14 18:38:07.833682] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:17:52.112 [2024-12-14 18:38:07.833692] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:17:52.112 [2024-12-14 18:38:07.834686] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:52.112 [2024-12-14 18:38:07.834719] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:17:52.112 [2024-12-14 18:38:07.834730] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:17:52.112 [2024-12-14 18:38:07.835666] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:52.112 [2024-12-14 18:38:07.835688] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:52.112 [2024-12-14 18:38:07.836676] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:52.112 [2024-12-14 18:38:07.836697] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:17:52.112 [2024-12-14 18:38:07.836704] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:17:52.112 [2024-12-14 18:38:07.836712] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:52.112 [2024-12-14 18:38:07.836818] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:17:52.112 [2024-12-14 18:38:07.836824] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:52.112 [2024-12-14 18:38:07.836829] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:52.112 [2024-12-14 18:38:07.837676] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:52.112 [2024-12-14 18:38:07.838703] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:52.112 [2024-12-14 18:38:07.839704] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:52.112 [2024-12-14 18:38:07.840697] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:52.112 [2024-12-14 18:38:07.840767] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:52.112 [2024-12-14 18:38:07.841709] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:52.112 [2024-12-14 18:38:07.841732] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:52.112 [2024-12-14 18:38:07.841739] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:17:52.112 [2024-12-14 18:38:07.841758] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:17:52.112 [2024-12-14 18:38:07.841773] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:17:52.112 [2024-12-14 18:38:07.841790] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:52.112 [2024-12-14 18:38:07.841796] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:52.112 [2024-12-14 18:38:07.841800] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:52.112 [2024-12-14 18:38:07.841813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:52.112 [2024-12-14 18:38:07.850647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:52.112 [2024-12-14 18:38:07.850679] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:17:52.112 [2024-12-14 18:38:07.850685] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:17:52.112 [2024-12-14 18:38:07.850690] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:17:52.112 [2024-12-14 18:38:07.850695] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:52.112 [2024-12-14 18:38:07.850700] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:17:52.112 [2024-12-14 18:38:07.850705] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:17:52.112 [2024-12-14 18:38:07.850710] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:17:52.112 [2024-12-14 18:38:07.850720] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:17:52.112 [2024-12-14 18:38:07.850732] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:52.112 [2024-12-14 18:38:07.858608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:52.112 [2024-12-14 18:38:07.858634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.112 [2024-12-14 18:38:07.858674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.112 [2024-12-14 18:38:07.858682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.112 [2024-12-14 18:38:07.858691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.112 [2024-12-14 18:38:07.858696] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:17:52.112 [2024-12-14 18:38:07.858712] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:52.112 [2024-12-14 18:38:07.858723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:52.372 [2024-12-14 18:38:07.866612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:52.373 [2024-12-14 18:38:07.866630] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:17:52.373 [2024-12-14 18:38:07.866681] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:52.373 [2024-12-14 18:38:07.866690] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:17:52.373 [2024-12-14 18:38:07.866702] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:17:52.373 [2024-12-14 18:38:07.866713] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:52.373 [2024-12-14 18:38:07.874611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:52.373 [2024-12-14 18:38:07.874706] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:17:52.373 [2024-12-14 18:38:07.874719] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:17:52.373 [2024-12-14 18:38:07.874728] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:52.373 [2024-12-14 18:38:07.874733] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:52.373 [2024-12-14 18:38:07.874736] 
nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:52.373 [2024-12-14 18:38:07.874743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:52.373 [2024-12-14 18:38:07.881642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:52.373 [2024-12-14 18:38:07.881665] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:17:52.373 [2024-12-14 18:38:07.881679] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:17:52.373 [2024-12-14 18:38:07.881688] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:17:52.373 [2024-12-14 18:38:07.881696] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:52.373 [2024-12-14 18:38:07.881701] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:52.373 [2024-12-14 18:38:07.881704] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:52.373 [2024-12-14 18:38:07.881711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:52.373 [2024-12-14 18:38:07.889610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:52.373 [2024-12-14 18:38:07.889640] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:52.373 [2024-12-14 18:38:07.889652] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:52.373 [2024-12-14 18:38:07.889661] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:52.373 [2024-12-14 18:38:07.889665] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:52.373 [2024-12-14 18:38:07.889669] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:52.373 [2024-12-14 18:38:07.889676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:52.373 [2024-12-14 18:38:07.897641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:52.373 [2024-12-14 18:38:07.897664] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:52.373 [2024-12-14 18:38:07.897673] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:52.373 [2024-12-14 18:38:07.897684] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:52.373 [2024-12-14 18:38:07.897691] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:17:52.373 [2024-12-14 18:38:07.897696] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:52.373 [2024-12-14 18:38:07.897701] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:52.373 [2024-12-14 18:38:07.897706] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:52.373 [2024-12-14 18:38:07.897711] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:52.373 [2024-12-14 18:38:07.897715] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:52.373 [2024-12-14 18:38:07.897734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:52.373 [2024-12-14 18:38:07.907654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:52.373 [2024-12-14 18:38:07.907682] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:52.373 [2024-12-14 18:38:07.915638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:52.373 [2024-12-14 18:38:07.915664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:52.373 [2024-12-14 18:38:07.923643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:52.373 [2024-12-14 18:38:07.923668] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:52.373 [2024-12-14 18:38:07.931640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:52.373 [2024-12-14 18:38:07.931672] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:52.373 [2024-12-14 18:38:07.931679] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:52.373 [2024-12-14 18:38:07.931683] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:52.373 [2024-12-14 18:38:07.931686] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:52.373 [2024-12-14 18:38:07.931689] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:52.373 [2024-12-14 18:38:07.931696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:52.373 [2024-12-14 18:38:07.931703] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:52.373 [2024-12-14 18:38:07.931708] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002fc000 00:17:52.373 [2024-12-14 18:38:07.931711] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:52.373 [2024-12-14 18:38:07.931717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:52.373 [2024-12-14 18:38:07.931724] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:52.373 [2024-12-14 18:38:07.931729] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:52.373 [2024-12-14 18:38:07.931732] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:52.373 [2024-12-14 18:38:07.931738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:52.373 [2024-12-14 18:38:07.931746] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:52.373 [2024-12-14 18:38:07.931750] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:52.373 [2024-12-14 18:38:07.931753] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:52.373 [2024-12-14 18:38:07.931759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:52.373 ===================================================== 00:17:52.373 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:52.373 ===================================================== 00:17:52.373 Controller Capabilities/Features 00:17:52.373 ================================ 00:17:52.373 Vendor ID: 4e58 00:17:52.373 Subsystem Vendor ID: 4e58 00:17:52.373 Serial Number: SPDK2 00:17:52.373 Model Number: SPDK bdev Controller 00:17:52.373 Firmware Version: 24.09.1 00:17:52.373 Recommended Arb Burst: 6 00:17:52.373 IEEE OUI Identifier: 8d 6b 50 00:17:52.373 Multi-path I/O 00:17:52.373 May have multiple subsystem ports: Yes 00:17:52.373 May have multiple controllers: Yes 00:17:52.373 Associated with SR-IOV VF: No 00:17:52.373 Max Data Transfer Size: 131072 00:17:52.373 Max Number of Namespaces: 32 00:17:52.373 Max Number of I/O Queues: 127 00:17:52.373 NVMe Specification Version (VS): 1.3 00:17:52.373 NVMe Specification Version (Identify): 1.3 00:17:52.373 Maximum Queue Entries: 256 00:17:52.373 Contiguous Queues Required: Yes 00:17:52.373 Arbitration Mechanisms Supported 00:17:52.373 Weighted Round Robin: Not Supported 00:17:52.373 Vendor Specific: Not Supported 00:17:52.373 Reset Timeout: 15000 ms 00:17:52.373 Doorbell Stride: 4 bytes 00:17:52.373 NVM Subsystem Reset: Not Supported 00:17:52.373 Command Sets Supported 00:17:52.373 NVM Command Set: Supported 00:17:52.373 Boot Partition: Not Supported 00:17:52.373 Memory Page Size Minimum: 4096 bytes 00:17:52.373 Memory Page Size Maximum: 4096 bytes 00:17:52.373 Persistent Memory Region: Not Supported 00:17:52.373 Optional Asynchronous Events Supported 00:17:52.373 Namespace Attribute Notices: Supported 00:17:52.373 Firmware Activation Notices: Not Supported 00:17:52.373 ANA Change Notices: Not Supported 00:17:52.373 PLE Aggregate Log Change Notices: Not Supported 00:17:52.373 LBA Status Info Alert Notices: Not Supported 
00:17:52.373 EGE Aggregate Log Change Notices: Not Supported 00:17:52.373 Normal NVM Subsystem Shutdown event: Not Supported 00:17:52.373 Zone Descriptor Change Notices: Not Supported 00:17:52.373 Discovery Log Change Notices: Not Supported 00:17:52.373 Controller Attributes 00:17:52.373 128-bit Host Identifier: Supported 00:17:52.373 Non-Operational Permissive Mode: Not Supported 00:17:52.374 NVM Sets: Not Supported 00:17:52.374 Read Recovery Levels: Not Supported 00:17:52.374 Endurance Groups: Not Supported 00:17:52.374 Predictable Latency Mode: Not Supported 00:17:52.374 Traffic Based Keep ALive: Not Supported 00:17:52.374 Namespace Granularity: Not Supported 00:17:52.374 SQ Associations: Not Supported 00:17:52.374 UUID List: Not Supported 00:17:52.374 Multi-Domain Subsystem: Not Supported 00:17:52.374 Fixed Capacity Management: Not Supported 00:17:52.374 Variable Capacity Management: Not Supported 00:17:52.374 Delete Endurance Group: Not Supported 00:17:52.374 Delete NVM Set: Not Supported 00:17:52.374 Extended LBA Formats Supported: Not Supported 00:17:52.374 Flexible Data Placement Supported: Not Supported 00:17:52.374 00:17:52.374 Controller Memory Buffer Support 00:17:52.374 ================================ 00:17:52.374 Supported: No 00:17:52.374 00:17:52.374 Persistent Memory Region Support 00:17:52.374 ================================ 00:17:52.374 Supported: No 00:17:52.374 00:17:52.374 Admin Command Set Attributes 00:17:52.374 ============================ 00:17:52.374 Security Send/Receive: Not Supported 00:17:52.374 Format NVM: Not Supported 00:17:52.374 Firmware Activate/Download: Not Supported 00:17:52.374 Namespace Management: Not Supported 00:17:52.374 Device Self-Test: Not Supported 00:17:52.374 Directives: Not Supported 00:17:52.374 NVMe-MI: Not Supported 00:17:52.374 Virtualization Management: Not Supported 00:17:52.374 Doorbell Buffer Config: Not Supported 00:17:52.374 Get LBA Status Capability: Not Supported 00:17:52.374 Command & Feature Lockdown Capability: Not Supported 00:17:52.374 Abort Command Limit: 4 00:17:52.374 Async Event Request Limit: 4 00:17:52.374 Number of Firmware Slots: N/A 00:17:52.374 Firmware Slot 1 Read-Only: N/A 00:17:52.374 [2024-12-14 18:38:07.939647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:52.374 [2024-12-14 18:38:07.939676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:52.374 [2024-12-14 18:38:07.939693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:52.374 [2024-12-14 18:38:07.939701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:52.374 Firmware Activation Without Reset: N/A 00:17:52.374 Multiple Update Detection Support: N/A 00:17:52.374 Firmware Update Granularity: No Information Provided 00:17:52.374 Per-Namespace SMART Log: No 00:17:52.374 Asymmetric Namespace Access Log Page: Not Supported 00:17:52.374 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:52.374 Command Effects Log Page: Supported 00:17:52.374 Get Log Page Extended Data: Supported 00:17:52.374 Telemetry Log Pages: Not Supported 00:17:52.374 Persistent Event Log Pages: Not Supported 00:17:52.374 Supported Log Pages Log Page: May Support 00:17:52.374 Commands Supported & Effects Log Page: Not Supported 00:17:52.374 Feature Identifiers & Effects Log Page:May Support
00:17:52.374 NVMe-MI Commands & Effects Log Page: May Support 00:17:52.374 Data Area 4 for Telemetry Log: Not Supported 00:17:52.374 Error Log Page Entries Supported: 128 00:17:52.374 Keep Alive: Supported 00:17:52.374 Keep Alive Granularity: 10000 ms 00:17:52.374 00:17:52.374 NVM Command Set Attributes 00:17:52.374 ========================== 00:17:52.374 Submission Queue Entry Size 00:17:52.374 Max: 64 00:17:52.374 Min: 64 00:17:52.374 Completion Queue Entry Size 00:17:52.374 Max: 16 00:17:52.374 Min: 16 00:17:52.374 Number of Namespaces: 32 00:17:52.374 Compare Command: Supported 00:17:52.374 Write Uncorrectable Command: Not Supported 00:17:52.374 Dataset Management Command: Supported 00:17:52.374 Write Zeroes Command: Supported 00:17:52.374 Set Features Save Field: Not Supported 00:17:52.374 Reservations: Not Supported 00:17:52.374 Timestamp: Not Supported 00:17:52.374 Copy: Supported 00:17:52.374 Volatile Write Cache: Present 00:17:52.374 Atomic Write Unit (Normal): 1 00:17:52.374 Atomic Write Unit (PFail): 1 00:17:52.374 Atomic Compare & Write Unit: 1 00:17:52.374 Fused Compare & Write: Supported 00:17:52.374 Scatter-Gather List 00:17:52.374 SGL Command Set: Supported (Dword aligned) 00:17:52.374 SGL Keyed: Not Supported 00:17:52.374 SGL Bit Bucket Descriptor: Not Supported 00:17:52.374 SGL Metadata Pointer: Not Supported 00:17:52.374 Oversized SGL: Not Supported 00:17:52.374 SGL Metadata Address: Not Supported 00:17:52.374 SGL Offset: Not Supported 00:17:52.374 Transport SGL Data Block: Not Supported 00:17:52.374 Replay Protected Memory Block: Not Supported 00:17:52.374 00:17:52.374 Firmware Slot Information 00:17:52.374 ========================= 00:17:52.374 Active slot: 1 00:17:52.374 Slot 1 Firmware Revision: 24.09.1 00:17:52.374 00:17:52.374 00:17:52.374 Commands Supported and Effects 00:17:52.374 ============================== 00:17:52.374 Admin Commands 00:17:52.374 -------------- 00:17:52.374 Get Log Page (02h): Supported 00:17:52.374 Identify (06h): Supported 00:17:52.374 Abort (08h): Supported 00:17:52.374 Set Features (09h): Supported 00:17:52.374 Get Features (0Ah): Supported 00:17:52.374 Asynchronous Event Request (0Ch): Supported 00:17:52.374 Keep Alive (18h): Supported 00:17:52.374 I/O Commands 00:17:52.374 ------------ 00:17:52.374 Flush (00h): Supported LBA-Change 00:17:52.374 Write (01h): Supported LBA-Change 00:17:52.374 Read (02h): Supported 00:17:52.374 Compare (05h): Supported 00:17:52.374 Write Zeroes (08h): Supported LBA-Change 00:17:52.374 Dataset Management (09h): Supported LBA-Change 00:17:52.374 Copy (19h): Supported LBA-Change 00:17:52.374 00:17:52.374 Error Log 00:17:52.374 ========= 00:17:52.374 00:17:52.374 Arbitration 00:17:52.374 =========== 00:17:52.374 Arbitration Burst: 1 00:17:52.374 00:17:52.374 Power Management 00:17:52.374 ================ 00:17:52.374 Number of Power States: 1 00:17:52.374 Current Power State: Power State #0 00:17:52.374 Power State #0: 00:17:52.374 Max Power: 0.00 W 00:17:52.374 Non-Operational State: Operational 00:17:52.374 Entry Latency: Not Reported 00:17:52.374 Exit Latency: Not Reported 00:17:52.374 Relative Read Throughput: 0 00:17:52.374 Relative Read Latency: 0 00:17:52.374 Relative Write Throughput: 0 00:17:52.374 Relative Write Latency: 0 00:17:52.374 Idle Power: Not Reported 00:17:52.374 Active Power: Not Reported 00:17:52.374 Non-Operational Permissive Mode: Not Supported 00:17:52.374 00:17:52.374 Health Information 00:17:52.374 ================== 00:17:52.374 Critical Warnings: 00:17:52.374 Available Spare 
Space: OK 00:17:52.374 Temperature: OK 00:17:52.374 Device Reliability: OK 00:17:52.374 Read Only: No 00:17:52.374 Volatile Memory Backup: OK 00:17:52.374 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:52.374 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:52.374 Available Spare: 0% 00:17:52.374 [2024-12-14 18:38:07.939853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:52.374 [2024-12-14 18:38:07.947643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:52.374 [2024-12-14 18:38:07.947694] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:17:52.374 [2024-12-14 18:38:07.947706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.374 [2024-12-14 18:38:07.947713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.374 [2024-12-14 18:38:07.947720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.374 [2024-12-14 18:38:07.947726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.374 [2024-12-14 18:38:07.947788] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:52.374 [2024-12-14 18:38:07.947803] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:52.374 [2024-12-14 18:38:07.948785] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:52.374 [2024-12-14 18:38:07.948881] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:17:52.374 [2024-12-14 18:38:07.948893] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:17:52.374 [2024-12-14 18:38:07.949794] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:52.374 [2024-12-14 18:38:07.949821] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:17:52.374 [2024-12-14 18:38:07.949886] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:52.374 [2024-12-14 18:38:07.951066] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:52.374 Available Spare Threshold: 0% 00:17:52.374 Life Percentage Used: 0% 00:17:52.374 Data Units Read: 0 00:17:52.374 Data Units Written: 0 00:17:52.374 Host Read Commands: 0 00:17:52.374 Host Write Commands: 0 00:17:52.374 Controller Busy Time: 0 minutes 00:17:52.374 Power Cycles: 0 00:17:52.374 Power On Hours: 0 hours 00:17:52.374 Unsafe Shutdowns: 0 00:17:52.374 Unrecoverable Media Errors: 0 00:17:52.374 Lifetime Error Log Entries: 0 00:17:52.374 Warning Temperature Time: 0 minutes 00:17:52.374 Critical Temperature Time: 0 minutes 00:17:52.375 00:17:52.375 Number of
Queues 00:17:52.375 ================ 00:17:52.375 Number of I/O Submission Queues: 127 00:17:52.375 Number of I/O Completion Queues: 127 00:17:52.375 00:17:52.375 Active Namespaces 00:17:52.375 ================= 00:17:52.375 Namespace ID:1 00:17:52.375 Error Recovery Timeout: Unlimited 00:17:52.375 Command Set Identifier: NVM (00h) 00:17:52.375 Deallocate: Supported 00:17:52.375 Deallocated/Unwritten Error: Not Supported 00:17:52.375 Deallocated Read Value: Unknown 00:17:52.375 Deallocate in Write Zeroes: Not Supported 00:17:52.375 Deallocated Guard Field: 0xFFFF 00:17:52.375 Flush: Supported 00:17:52.375 Reservation: Supported 00:17:52.375 Namespace Sharing Capabilities: Multiple Controllers 00:17:52.375 Size (in LBAs): 131072 (0GiB) 00:17:52.375 Capacity (in LBAs): 131072 (0GiB) 00:17:52.375 Utilization (in LBAs): 131072 (0GiB) 00:17:52.375 NGUID: 49B28C4CDFB9450F88AF0A46B372646E 00:17:52.375 UUID: 49b28c4c-dfb9-450f-88af-0a46b372646e 00:17:52.375 Thin Provisioning: Not Supported 00:17:52.375 Per-NS Atomic Units: Yes 00:17:52.375 Atomic Boundary Size (Normal): 0 00:17:52.375 Atomic Boundary Size (PFail): 0 00:17:52.375 Atomic Boundary Offset: 0 00:17:52.375 Maximum Single Source Range Length: 65535 00:17:52.375 Maximum Copy Length: 65535 00:17:52.375 Maximum Source Range Count: 1 00:17:52.375 NGUID/EUI64 Never Reused: No 00:17:52.375 Namespace Write Protected: No 00:17:52.375 Number of LBA Formats: 1 00:17:52.375 Current LBA Format: LBA Format #00 00:17:52.375 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:52.375 00:17:52.375 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:52.632 [2024-12-14 18:38:08.272314] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:57.903 Initializing NVMe Controllers 00:17:57.903 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:57.903 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:57.903 Initialization complete. Launching workers. 
00:17:57.903 ======================================================== 00:17:57.903 Latency(us) 00:17:57.903 Device Information : IOPS MiB/s Average min max 00:17:57.903 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 38438.97 150.15 3329.64 915.37 10500.92 00:17:57.903 ======================================================== 00:17:57.903 Total : 38438.97 150.15 3329.64 915.37 10500.92 00:17:57.903 00:17:57.903 [2024-12-14 18:38:13.374101] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:57.903 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:58.161 [2024-12-14 18:38:13.699595] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:03.436 Initializing NVMe Controllers 00:18:03.436 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:03.436 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:03.436 Initialization complete. Launching workers. 00:18:03.436 ======================================================== 00:18:03.436 Latency(us) 00:18:03.436 Device Information : IOPS MiB/s Average min max 00:18:03.436 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 38301.88 149.62 3341.70 1043.53 10489.37 00:18:03.436 ======================================================== 00:18:03.436 Total : 38301.88 149.62 3341.70 1043.53 10489.37 00:18:03.436 00:18:03.436 [2024-12-14 18:38:18.711341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:03.436 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:03.436 [2024-12-14 18:38:18.980425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:08.710 [2024-12-14 18:38:24.109785] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:08.710 Initializing NVMe Controllers 00:18:08.710 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:08.710 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:08.710 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:08.710 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:08.710 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:08.710 Initialization complete. Launching workers. 
00:18:08.710 Starting thread on core 2 00:18:08.710 Starting thread on core 3 00:18:08.710 Starting thread on core 1 00:18:08.710 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:08.710 [2024-12-14 18:38:24.443030] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:12.018 [2024-12-14 18:38:27.489023] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:12.018 Initializing NVMe Controllers 00:18:12.018 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:12.018 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:12.018 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:12.018 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:12.018 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:12.018 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:12.018 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:18:12.019 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:12.019 Initialization complete. Launching workers. 00:18:12.019 Starting thread on core 1 with urgent priority queue 00:18:12.019 Starting thread on core 2 with urgent priority queue 00:18:12.019 Starting thread on core 3 with urgent priority queue 00:18:12.019 Starting thread on core 0 with urgent priority queue 00:18:12.019 SPDK bdev Controller (SPDK2 ) core 0: 4999.67 IO/s 20.00 secs/100000 ios 00:18:12.019 SPDK bdev Controller (SPDK2 ) core 1: 4741.00 IO/s 21.09 secs/100000 ios 00:18:12.019 SPDK bdev Controller (SPDK2 ) core 2: 4459.33 IO/s 22.42 secs/100000 ios 00:18:12.019 SPDK bdev Controller (SPDK2 ) core 3: 4787.00 IO/s 20.89 secs/100000 ios 00:18:12.019 ======================================================== 00:18:12.019 00:18:12.019 18:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:12.278 [2024-12-14 18:38:27.806780] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:12.278 Initializing NVMe Controllers 00:18:12.278 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:12.278 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:12.278 Namespace ID: 1 size: 0GB 00:18:12.278 Initialization complete. 00:18:12.278 INFO: using host memory buffer for IO 00:18:12.278 Hello world! 
00:18:12.278 [2024-12-14 18:38:27.819947] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:12.278 18:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:12.537 [2024-12-14 18:38:28.130078] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:13.474 Initializing NVMe Controllers 00:18:13.474 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:13.474 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:13.474 Initialization complete. Launching workers. 00:18:13.474 submit (in ns) avg, min, max = 5198.8, 3490.0, 5093053.6 00:18:13.474 complete (in ns) avg, min, max = 27363.0, 1850.9, 7087569.1 00:18:13.474 00:18:13.474 Submit histogram 00:18:13.474 ================ 00:18:13.474 Range in us Cumulative Count 00:18:13.474 3.476 - 3.491: 0.0076% ( 1) 00:18:13.474 3.491 - 3.505: 0.1135% ( 14) 00:18:13.474 3.505 - 3.520: 0.5752% ( 61) 00:18:13.474 3.520 - 3.535: 1.8468% ( 168) 00:18:13.474 3.535 - 3.549: 4.8744% ( 400) 00:18:13.474 3.549 - 3.564: 10.0288% ( 681) 00:18:13.474 3.564 - 3.578: 16.7196% ( 884) 00:18:13.474 3.578 - 3.593: 26.8241% ( 1335) 00:18:13.474 3.593 - 3.607: 37.0421% ( 1350) 00:18:13.474 3.607 - 3.622: 45.5646% ( 1126) 00:18:13.474 3.622 - 3.636: 52.5583% ( 924) 00:18:13.474 3.636 - 3.651: 59.0902% ( 863) 00:18:13.474 3.651 - 3.665: 64.0024% ( 649) 00:18:13.474 3.665 - 3.680: 68.5589% ( 602) 00:18:13.474 3.680 - 3.695: 71.8059% ( 429) 00:18:13.474 3.695 - 3.709: 74.4853% ( 354) 00:18:13.474 3.709 - 3.724: 77.0209% ( 335) 00:18:13.474 3.724 - 3.753: 79.8895% ( 379) 00:18:13.474 3.753 - 3.782: 81.9558% ( 273) 00:18:13.474 3.782 - 3.811: 83.5226% ( 207) 00:18:13.474 3.811 - 3.840: 84.7109% ( 157) 00:18:13.474 3.840 - 3.869: 85.8613% ( 152) 00:18:13.474 3.869 - 3.898: 87.2010% ( 177) 00:18:13.474 3.898 - 3.927: 88.7754% ( 208) 00:18:13.474 3.927 - 3.956: 90.2740% ( 198) 00:18:13.474 3.956 - 3.985: 92.1359% ( 246) 00:18:13.474 3.985 - 4.015: 93.4454% ( 173) 00:18:13.474 4.015 - 4.044: 94.6185% ( 155) 00:18:13.474 4.044 - 4.073: 95.2922% ( 89) 00:18:13.474 4.073 - 4.102: 95.7009% ( 54) 00:18:13.474 4.102 - 4.131: 95.9204% ( 29) 00:18:13.474 4.131 - 4.160: 96.1247% ( 27) 00:18:13.474 4.160 - 4.189: 96.2534% ( 17) 00:18:13.474 4.189 - 4.218: 96.3140% ( 8) 00:18:13.474 4.218 - 4.247: 96.3821% ( 9) 00:18:13.475 4.247 - 4.276: 96.4502% ( 9) 00:18:13.475 4.276 - 4.305: 96.4956% ( 6) 00:18:13.475 4.305 - 4.335: 96.5335% ( 5) 00:18:13.475 4.335 - 4.364: 96.6016% ( 9) 00:18:13.475 4.364 - 4.393: 96.6924% ( 12) 00:18:13.475 4.393 - 4.422: 96.8211% ( 17) 00:18:13.475 4.422 - 4.451: 96.9422% ( 16) 00:18:13.475 4.451 - 4.480: 97.0330% ( 12) 00:18:13.475 4.480 - 4.509: 97.1163% ( 11) 00:18:13.475 4.509 - 4.538: 97.2147% ( 13) 00:18:13.475 4.538 - 4.567: 97.2525% ( 5) 00:18:13.475 4.567 - 4.596: 97.2979% ( 6) 00:18:13.475 4.596 - 4.625: 97.3660% ( 9) 00:18:13.475 4.655 - 4.684: 97.3812% ( 2) 00:18:13.475 4.742 - 4.771: 97.3887% ( 1) 00:18:13.475 4.800 - 4.829: 97.4039% ( 2) 00:18:13.475 4.858 - 4.887: 97.4417% ( 5) 00:18:13.475 4.887 - 4.916: 97.4644% ( 3) 00:18:13.475 4.916 - 4.945: 97.4720% ( 1) 00:18:13.475 4.945 - 4.975: 97.4796% ( 1) 00:18:13.475 4.975 - 5.004: 97.4871% ( 1) 00:18:13.475 5.033 - 5.062: 97.4947% ( 1) 00:18:13.475 5.149 - 5.178: 97.5023% ( 
1) 00:18:13.475 5.295 - 5.324: 97.5174% ( 2) 00:18:13.475 5.353 - 5.382: 97.5250% ( 1) 00:18:13.475 5.382 - 5.411: 97.5325% ( 1) 00:18:13.475 5.527 - 5.556: 97.5401% ( 1) 00:18:13.475 5.731 - 5.760: 97.5477% ( 1) 00:18:13.475 5.818 - 5.847: 97.5553% ( 1) 00:18:13.475 5.935 - 5.964: 97.5628% ( 1) 00:18:13.475 6.138 - 6.167: 97.5704% ( 1) 00:18:13.475 6.196 - 6.225: 97.5780% ( 1) 00:18:13.475 6.225 - 6.255: 97.5855% ( 1) 00:18:13.475 6.255 - 6.284: 97.5931% ( 1) 00:18:13.475 6.342 - 6.371: 97.6007% ( 1) 00:18:13.475 6.371 - 6.400: 97.6082% ( 1) 00:18:13.475 6.458 - 6.487: 97.6158% ( 1) 00:18:13.475 6.487 - 6.516: 97.6234% ( 1) 00:18:13.475 6.516 - 6.545: 97.6309% ( 1) 00:18:13.475 6.575 - 6.604: 97.6385% ( 1) 00:18:13.475 6.633 - 6.662: 97.6461% ( 1) 00:18:13.475 6.778 - 6.807: 97.6536% ( 1) 00:18:13.475 6.807 - 6.836: 97.6612% ( 1) 00:18:13.475 6.836 - 6.865: 97.6688% ( 1) 00:18:13.475 6.895 - 6.924: 97.6764% ( 1) 00:18:13.475 6.953 - 6.982: 97.6839% ( 1) 00:18:13.475 7.040 - 7.069: 97.6915% ( 1) 00:18:13.475 7.215 - 7.244: 97.6991% ( 1) 00:18:13.475 7.244 - 7.273: 97.7066% ( 1) 00:18:13.475 7.273 - 7.302: 97.7218% ( 2) 00:18:13.475 7.302 - 7.331: 97.7369% ( 2) 00:18:13.475 7.331 - 7.360: 97.7445% ( 1) 00:18:13.475 7.389 - 7.418: 97.7596% ( 2) 00:18:13.475 7.447 - 7.505: 97.7672% ( 1) 00:18:13.475 7.622 - 7.680: 97.7748% ( 1) 00:18:13.475 7.913 - 7.971: 97.7975% ( 3) 00:18:13.475 7.971 - 8.029: 97.8202% ( 3) 00:18:13.475 8.087 - 8.145: 97.8277% ( 1) 00:18:13.475 8.145 - 8.204: 97.8353% ( 1) 00:18:13.475 8.204 - 8.262: 97.8504% ( 2) 00:18:13.475 8.262 - 8.320: 97.8580% ( 1) 00:18:13.475 8.320 - 8.378: 97.8807% ( 3) 00:18:13.475 8.378 - 8.436: 97.9034% ( 3) 00:18:13.475 8.436 - 8.495: 97.9337% ( 4) 00:18:13.475 8.495 - 8.553: 97.9564% ( 3) 00:18:13.475 8.553 - 8.611: 97.9715% ( 2) 00:18:13.475 8.611 - 8.669: 97.9942% ( 3) 00:18:13.475 8.669 - 8.727: 98.0094% ( 2) 00:18:13.475 8.727 - 8.785: 98.0170% ( 1) 00:18:13.475 8.785 - 8.844: 98.0775% ( 8) 00:18:13.475 8.844 - 8.902: 98.1002% ( 3) 00:18:13.475 8.902 - 8.960: 98.1078% ( 1) 00:18:13.475 8.960 - 9.018: 98.1229% ( 2) 00:18:13.475 9.018 - 9.076: 98.1608% ( 5) 00:18:13.475 9.076 - 9.135: 98.1910% ( 4) 00:18:13.475 9.135 - 9.193: 98.1986% ( 1) 00:18:13.475 9.193 - 9.251: 98.2213% ( 3) 00:18:13.475 9.251 - 9.309: 98.2516% ( 4) 00:18:13.475 9.309 - 9.367: 98.2819% ( 4) 00:18:13.475 9.367 - 9.425: 98.3046% ( 3) 00:18:13.475 9.425 - 9.484: 98.3121% ( 1) 00:18:13.475 9.484 - 9.542: 98.3424% ( 4) 00:18:13.475 9.542 - 9.600: 98.3500% ( 1) 00:18:13.475 9.600 - 9.658: 98.3727% ( 3) 00:18:13.475 9.658 - 9.716: 98.3803% ( 1) 00:18:13.475 9.716 - 9.775: 98.3878% ( 1) 00:18:13.475 9.775 - 9.833: 98.3954% ( 1) 00:18:13.475 9.949 - 10.007: 98.4030% ( 1) 00:18:13.475 10.007 - 10.065: 98.4105% ( 1) 00:18:13.475 10.124 - 10.182: 98.4484% ( 5) 00:18:13.475 10.240 - 10.298: 98.4711% ( 3) 00:18:13.475 10.298 - 10.356: 98.4938% ( 3) 00:18:13.475 10.356 - 10.415: 98.5165% ( 3) 00:18:13.475 10.415 - 10.473: 98.5241% ( 1) 00:18:13.475 10.473 - 10.531: 98.5468% ( 3) 00:18:13.475 10.531 - 10.589: 98.5695% ( 3) 00:18:13.475 10.589 - 10.647: 98.5922% ( 3) 00:18:13.475 10.647 - 10.705: 98.6149% ( 3) 00:18:13.475 10.705 - 10.764: 98.6225% ( 1) 00:18:13.475 10.764 - 10.822: 98.6376% ( 2) 00:18:13.475 10.880 - 10.938: 98.6452% ( 1) 00:18:13.475 10.938 - 10.996: 98.6754% ( 4) 00:18:13.475 11.055 - 11.113: 98.7057% ( 4) 00:18:13.475 11.113 - 11.171: 98.7209% ( 2) 00:18:13.475 11.171 - 11.229: 98.7284% ( 1) 00:18:13.475 11.287 - 11.345: 98.7511% ( 3) 00:18:13.475 11.345 - 
11.404: 98.7587% ( 1) 00:18:13.475 11.404 - 11.462: 98.7738% ( 2) 00:18:13.475 11.520 - 11.578: 98.7890% ( 2) 00:18:13.475 11.578 - 11.636: 98.7965% ( 1) 00:18:13.475 11.636 - 11.695: 98.8117% ( 2) 00:18:13.475 11.753 - 11.811: 98.8420% ( 4) 00:18:13.475 11.811 - 11.869: 98.8722% ( 4) 00:18:13.475 11.869 - 11.927: 98.8949% ( 3) 00:18:13.475 11.985 - 12.044: 98.9025% ( 1) 00:18:13.475 12.044 - 12.102: 98.9101% ( 1) 00:18:13.475 12.102 - 12.160: 98.9177% ( 1) 00:18:13.475 12.160 - 12.218: 98.9328% ( 2) 00:18:13.475 12.218 - 12.276: 98.9404% ( 1) 00:18:13.475 12.276 - 12.335: 98.9479% ( 1) 00:18:13.475 12.393 - 12.451: 98.9631% ( 2) 00:18:13.475 12.509 - 12.567: 98.9706% ( 1) 00:18:13.475 12.567 - 12.625: 98.9858% ( 2) 00:18:13.475 12.625 - 12.684: 98.9933% ( 1) 00:18:13.475 12.684 - 12.742: 99.0009% ( 1) 00:18:13.475 12.742 - 12.800: 99.0085% ( 1) 00:18:13.475 12.800 - 12.858: 99.0160% ( 1) 00:18:13.475 13.033 - 13.091: 99.0236% ( 1) 00:18:13.475 13.091 - 13.149: 99.0388% ( 2) 00:18:13.475 13.149 - 13.207: 99.0463% ( 1) 00:18:13.475 13.207 - 13.265: 99.0539% ( 1) 00:18:13.475 13.324 - 13.382: 99.0690% ( 2) 00:18:13.475 13.382 - 13.440: 99.0917% ( 3) 00:18:13.475 13.440 - 13.498: 99.0993% ( 1) 00:18:13.475 13.498 - 13.556: 99.1069% ( 1) 00:18:13.475 13.731 - 13.789: 99.1220% ( 2) 00:18:13.475 13.789 - 13.847: 99.1296% ( 1) 00:18:13.475 13.847 - 13.905: 99.1523% ( 3) 00:18:13.475 13.905 - 13.964: 99.1599% ( 1) 00:18:13.475 14.022 - 14.080: 99.1674% ( 1) 00:18:13.475 14.080 - 14.138: 99.1826% ( 2) 00:18:13.475 14.138 - 14.196: 99.1901% ( 1) 00:18:13.475 14.313 - 14.371: 99.2053% ( 2) 00:18:13.475 14.487 - 14.545: 99.2128% ( 1) 00:18:13.475 14.545 - 14.604: 99.2280% ( 2) 00:18:13.475 14.604 - 14.662: 99.2355% ( 1) 00:18:13.475 14.895 - 15.011: 99.2583% ( 3) 00:18:13.475 15.011 - 15.127: 99.2885% ( 4) 00:18:13.475 15.127 - 15.244: 99.3264% ( 5) 00:18:13.475 15.244 - 15.360: 99.3491% ( 3) 00:18:13.475 15.360 - 15.476: 99.3642% ( 2) 00:18:13.475 15.476 - 15.593: 99.3794% ( 2) 00:18:13.475 15.593 - 15.709: 99.4021% ( 3) 00:18:13.475 15.709 - 15.825: 99.4096% ( 1) 00:18:13.475 16.058 - 16.175: 99.4323% ( 3) 00:18:13.475 16.175 - 16.291: 99.4399% ( 1) 00:18:13.475 16.407 - 16.524: 99.4550% ( 2) 00:18:13.475 17.804 - 17.920: 99.4626% ( 1) 00:18:13.475 17.920 - 18.036: 99.4702% ( 1) 00:18:13.475 18.036 - 18.153: 99.4777% ( 1) 00:18:13.475 18.153 - 18.269: 99.4929% ( 2) 00:18:13.475 18.385 - 18.502: 99.5005% ( 1) 00:18:13.475 18.618 - 18.735: 99.5080% ( 1) 00:18:13.475 18.735 - 18.851: 99.5383% ( 4) 00:18:13.475 18.851 - 18.967: 99.5459% ( 1) 00:18:13.475 18.967 - 19.084: 99.5534% ( 1) 00:18:13.475 19.200 - 19.316: 99.5837% ( 4) 00:18:13.475 19.316 - 19.433: 99.6140% ( 4) 00:18:13.475 19.433 - 19.549: 99.6216% ( 1) 00:18:13.475 19.549 - 19.665: 99.6670% ( 6) 00:18:13.475 19.665 - 19.782: 99.7200% ( 7) 00:18:13.475 19.782 - 19.898: 99.7654% ( 6) 00:18:13.475 19.898 - 20.015: 99.7805% ( 2) 00:18:13.475 20.015 - 20.131: 99.7881% ( 1) 00:18:13.475 20.131 - 20.247: 99.8183% ( 4) 00:18:13.475 20.364 - 20.480: 99.8562% ( 5) 00:18:13.475 23.971 - 24.087: 99.8638% ( 1) 00:18:13.475 24.436 - 24.553: 99.8789% ( 2) 00:18:13.475 24.669 - 24.785: 99.8940% ( 2) 00:18:13.475 24.785 - 24.902: 99.9016% ( 1) 00:18:13.475 24.902 - 25.018: 99.9092% ( 1) 00:18:13.475 25.018 - 25.135: 99.9167% ( 1) 00:18:13.475 25.949 - 26.065: 99.9243% ( 1) 00:18:13.475 28.509 - 28.625: 99.9319% ( 1) 00:18:13.475 29.091 - 29.207: 99.9394% ( 1) 00:18:13.475 29.207 - 29.324: 99.9470% ( 1) 00:18:13.475 29.789 - 30.022: 99.9546% ( 1) 00:18:13.475 
30.720 - 30.953: 99.9622% ( 1) 36.305 - 36.538: 99.9697% ( 1) 3961.949 - 3991.738: 99.9773% ( 1) 3991.738 - 4021.527: 99.9849% ( 1) 4021.527 - 4051.316: 99.9924% ( 1) 5064.145 - 5093.935: 100.0000% ( 1) 00:18:13.476 00:18:13.476 Complete histogram 00:18:13.476 ================== 00:18:13.476 Range in us Cumulative Count 00:18:13.476 1.847 - 1.855: 0.0454% ( 6) 00:18:13.476 1.855 - 1.862: 2.8156% ( 366) 00:18:13.476 1.862 - 1.876: 59.8698% ( 7538) 00:18:13.476 1.876 - 1.891: 69.4747% ( 1269) 00:18:13.476 1.891 - 1.905: 70.2013% ( 96) 00:18:13.476 1.905 - 1.920: 73.2365% ( 401) 00:18:13.476 1.920 - 1.935: 87.6854% ( 1909) 00:18:13.476 1.935 - 1.949: 90.5843% ( 383) 00:18:13.476 1.949 - 1.964: 90.9930% ( 54) 00:18:13.476 1.964 - 1.978: 91.9619% ( 128) 00:18:13.476 1.978 - 1.993: 95.1938% ( 427) 00:18:13.476 1.993 - 2.007: 95.7690% ( 76) 00:18:13.476 2.007 - 2.022: 95.8674% ( 13) 00:18:13.476 2.022 - 2.036: 96.0490% ( 24) 00:18:13.476 2.036 - 2.051: 97.0708% ( 135) 00:18:13.476 2.051 - 2.065: 97.4417% ( 49) 00:18:13.476 2.065 - 2.080: 97.5250% ( 11) 00:18:13.476 2.080 - 2.095: 97.5704% ( 6) 00:18:13.476 2.095 - 2.109: 97.6688% ( 13) 00:18:13.476 2.109 - 2.124: 98.1381% ( 62) 00:18:13.476 2.124 - 2.138: 98.2062% ( 9) 00:18:13.476 2.138 - 2.153: 98.2213% ( 2) 00:18:13.476 2.153 - 2.167: 98.2667% ( 6) 00:18:13.735 [2024-12-14 18:38:29.229547] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:13.735 2.167 - 2.182: 98.3348% ( 9) 00:18:13.735 2.182 - 2.196: 98.5241% ( 25) 00:18:13.735 2.196 - 2.211: 98.5392% ( 2) 00:18:13.735 2.211 - 2.225: 98.5468% ( 1) 00:18:13.735 2.225 - 2.240: 98.5695% ( 3) 00:18:13.735 2.240 - 2.255: 98.5771% ( 1) 00:18:13.735 2.255 - 2.269: 98.6225% ( 6) 00:18:13.735 3.055 - 3.069: 98.6300% ( 1) 00:18:13.735 3.171 - 3.185: 98.6527% ( 3) 00:18:13.735 3.215 - 3.229: 98.6679% ( 2) 00:18:13.735 3.229 - 3.244: 98.6830% ( 2) 00:18:13.735 3.258 - 3.273: 98.6982% ( 2) 00:18:13.735 3.273 - 3.287: 98.7057% ( 1) 00:18:13.735 3.287 - 3.302: 98.7209% ( 2) 00:18:13.735 3.316 - 3.331: 98.7360% ( 2) 00:18:13.735 3.331 - 3.345: 98.7511% ( 2) 00:18:13.735 3.345 - 3.360: 98.7663% ( 2) 00:18:13.735 3.360 - 3.375: 98.7738% ( 1) 00:18:13.735 3.447 - 3.462: 98.7814% ( 1) 00:18:13.735 3.462 - 3.476: 98.7965% ( 2) 00:18:13.735 3.505 - 3.520: 98.8041% ( 1) 00:18:13.735 3.651 - 3.665: 98.8117% ( 1) 00:18:13.735 3.695 - 3.709: 98.8193% ( 1) 00:18:13.735 3.709 - 3.724: 98.8268% ( 1) 00:18:13.735 3.724 - 3.753: 98.8344% ( 1) 00:18:13.735 3.811 - 3.840: 98.8420% ( 1) 00:18:13.735 3.869 - 3.898: 98.8495% ( 1) 00:18:13.735 3.898 - 3.927: 98.8571% ( 1) 00:18:13.735 4.044 - 4.073: 98.8722% ( 2) 00:18:13.735 4.218 - 4.247: 98.8798% ( 1) 00:18:13.735 4.247 - 4.276: 98.8949% ( 2) 00:18:13.735 4.276 - 4.305: 98.9025% ( 1) 00:18:13.735 4.596 - 4.625: 98.9101% ( 1) 00:18:13.735 6.342 - 6.371: 98.9252% ( 2) 00:18:13.735 6.458 - 6.487: 98.9328% ( 1) 00:18:13.735 6.545 - 6.575: 98.9404% ( 1) 00:18:13.735 6.633 - 6.662: 98.9479% ( 1) 00:18:13.735 6.691 - 6.720: 98.9555% ( 1) 00:18:13.735 6.778 - 6.807: 98.9631% ( 1) 00:18:13.735 6.807 - 6.836: 98.9706% ( 1) 00:18:13.735 7.098 - 7.127: 98.9782% ( 1) 00:18:13.735 7.185 - 7.215: 98.9858% ( 1) 00:18:13.735 7.302 - 7.331: 98.9933% ( 1) 00:18:13.735 7.447 - 7.505: 99.0160% ( 3) 00:18:13.735 7.855 - 7.913: 99.0236% ( 1) 00:18:13.735 8.669 - 8.727: 99.0388% ( 2) 00:18:13.735 8.785 - 8.844: 99.0463% ( 1) 00:18:13.735 11.753 - 11.811: 99.0539% ( 1)
00:18:13.735 12.044 - 12.102: 99.0615% ( 1) 00:18:13.735 12.742 - 12.800: 99.0690% ( 1) 00:18:13.735 12.858 - 12.916: 99.0766% ( 1) 00:18:13.735 14.138 - 14.196: 99.0842% ( 1) 00:18:13.735 15.942 - 16.058: 99.0917% ( 1) 00:18:13.735 16.291 - 16.407: 99.1069% ( 2) 00:18:13.735 16.407 - 16.524: 99.1144% ( 1) 00:18:13.735 16.873 - 16.989: 99.1220% ( 1) 00:18:13.735 16.989 - 17.105: 99.1296% ( 1) 00:18:13.735 17.338 - 17.455: 99.1447% ( 2) 00:18:13.735 17.455 - 17.571: 99.1599% ( 2) 00:18:13.735 17.571 - 17.687: 99.1750% ( 2) 00:18:13.735 17.687 - 17.804: 99.1826% ( 1) 00:18:13.735 17.804 - 17.920: 99.1977% ( 2) 00:18:13.735 17.920 - 18.036: 99.2507% ( 7) 00:18:13.735 18.036 - 18.153: 99.2658% ( 2) 00:18:13.735 18.153 - 18.269: 99.3112% ( 6) 00:18:13.735 23.505 - 23.622: 99.3188% ( 1) 00:18:13.735 25.251 - 25.367: 99.3264% ( 1) 00:18:13.735 25.949 - 26.065: 99.3339% ( 1) 00:18:13.735 28.276 - 28.393: 99.3415% ( 1) 00:18:13.735 28.742 - 28.858: 99.3491% ( 1) 00:18:13.735 29.324 - 29.440: 99.3566% ( 1) 00:18:13.735 49.804 - 50.036: 99.3642% ( 1) 00:18:13.735 461.731 - 463.593: 99.3718% ( 1) 00:18:13.735 2010.764 - 2025.658: 99.3794% ( 1) 00:18:13.735 3038.487 - 3053.382: 99.3869% ( 1) 00:18:13.735 3902.371 - 3932.160: 99.3945% ( 1) 00:18:13.735 3932.160 - 3961.949: 99.4021% ( 1) 00:18:13.735 3961.949 - 3991.738: 99.4777% ( 10) 00:18:13.735 3991.738 - 4021.527: 99.7578% ( 37) 00:18:13.735 4021.527 - 4051.316: 99.9546% ( 26) 00:18:13.736 4051.316 - 4081.105: 99.9849% ( 4) 00:18:13.736 5987.607 - 6017.396: 99.9924% ( 1) 00:18:13.736 7060.015 - 7089.804: 100.0000% ( 1) 00:18:13.736 00:18:13.736 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:13.736 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:13.736 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:13.736 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:13.736 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:13.995 [ 00:18:13.995 { 00:18:13.995 "allow_any_host": true, 00:18:13.995 "hosts": [], 00:18:13.995 "listen_addresses": [], 00:18:13.995 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:13.995 "subtype": "Discovery" 00:18:13.995 }, 00:18:13.995 { 00:18:13.995 "allow_any_host": true, 00:18:13.995 "hosts": [], 00:18:13.995 "listen_addresses": [ 00:18:13.995 { 00:18:13.995 "adrfam": "IPv4", 00:18:13.995 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:13.995 "trsvcid": "0", 00:18:13.995 "trtype": "VFIOUSER" 00:18:13.995 } 00:18:13.995 ], 00:18:13.995 "max_cntlid": 65519, 00:18:13.995 "max_namespaces": 32, 00:18:13.995 "min_cntlid": 1, 00:18:13.995 "model_number": "SPDK bdev Controller", 00:18:13.995 "namespaces": [ 00:18:13.995 { 00:18:13.995 "bdev_name": "Malloc1", 00:18:13.995 "name": "Malloc1", 00:18:13.995 "nguid": "21CCAAFCCBFF4416BA0BE30E786E7A73", 00:18:13.995 "nsid": 1, 00:18:13.995 "uuid": "21ccaafc-cbff-4416-ba0b-e30e786e7a73" 00:18:13.995 }, 00:18:13.995 { 00:18:13.995 "bdev_name": "Malloc3", 00:18:13.995 "name": "Malloc3", 00:18:13.995 "nguid": "40D159A46FAA482E9A51FE3BAAF5BF56", 00:18:13.995 "nsid": 2, 00:18:13.995 "uuid": 
"40d159a4-6faa-482e-9a51-fe3baaf5bf56" 00:18:13.995 } 00:18:13.995 ], 00:18:13.995 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:13.995 "serial_number": "SPDK1", 00:18:13.995 "subtype": "NVMe" 00:18:13.995 }, 00:18:13.995 { 00:18:13.995 "allow_any_host": true, 00:18:13.995 "hosts": [], 00:18:13.995 "listen_addresses": [ 00:18:13.995 { 00:18:13.995 "adrfam": "IPv4", 00:18:13.995 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:13.995 "trsvcid": "0", 00:18:13.995 "trtype": "VFIOUSER" 00:18:13.995 } 00:18:13.995 ], 00:18:13.995 "max_cntlid": 65519, 00:18:13.995 "max_namespaces": 32, 00:18:13.995 "min_cntlid": 1, 00:18:13.995 "model_number": "SPDK bdev Controller", 00:18:13.995 "namespaces": [ 00:18:13.995 { 00:18:13.995 "bdev_name": "Malloc2", 00:18:13.995 "name": "Malloc2", 00:18:13.995 "nguid": "49B28C4CDFB9450F88AF0A46B372646E", 00:18:13.995 "nsid": 1, 00:18:13.995 "uuid": "49b28c4c-dfb9-450f-88af-0a46b372646e" 00:18:13.995 } 00:18:13.995 ], 00:18:13.995 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:13.995 "serial_number": "SPDK2", 00:18:13.995 "subtype": "NVMe" 00:18:13.995 } 00:18:13.995 ] 00:18:13.995 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:13.995 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=93544 00:18:13.995 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:13.995 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:13.995 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:13.995 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:13.995 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:18:13.995 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:18:13.995 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:18:13.995 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:13.995 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:18:13.995 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:18:13.995 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:18:14.254 [2024-12-14 18:38:29.774762] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:14.254 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:14.254 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:14.254 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:14.254 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:14.254 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:14.513 Malloc4 00:18:14.513 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:14.772 [2024-12-14 18:38:30.332334] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:14.772 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:14.772 Asynchronous Event Request test 00:18:14.772 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:14.772 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:14.772 Registering asynchronous event callbacks... 00:18:14.772 Starting namespace attribute notice tests for all controllers... 00:18:14.772 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:14.772 aer_cb - Changed Namespace 00:18:14.772 Cleaning up... 00:18:15.031 [ 00:18:15.031 { 00:18:15.031 "allow_any_host": true, 00:18:15.031 "hosts": [], 00:18:15.031 "listen_addresses": [], 00:18:15.031 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:15.031 "subtype": "Discovery" 00:18:15.031 }, 00:18:15.031 { 00:18:15.031 "allow_any_host": true, 00:18:15.031 "hosts": [], 00:18:15.031 "listen_addresses": [ 00:18:15.031 { 00:18:15.031 "adrfam": "IPv4", 00:18:15.031 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:15.031 "trsvcid": "0", 00:18:15.031 "trtype": "VFIOUSER" 00:18:15.031 } 00:18:15.031 ], 00:18:15.031 "max_cntlid": 65519, 00:18:15.031 "max_namespaces": 32, 00:18:15.031 "min_cntlid": 1, 00:18:15.031 "model_number": "SPDK bdev Controller", 00:18:15.031 "namespaces": [ 00:18:15.031 { 00:18:15.031 "bdev_name": "Malloc1", 00:18:15.031 "name": "Malloc1", 00:18:15.031 "nguid": "21CCAAFCCBFF4416BA0BE30E786E7A73", 00:18:15.031 "nsid": 1, 00:18:15.031 "uuid": "21ccaafc-cbff-4416-ba0b-e30e786e7a73" 00:18:15.031 }, 00:18:15.031 { 00:18:15.031 "bdev_name": "Malloc3", 00:18:15.031 "name": "Malloc3", 00:18:15.031 "nguid": "40D159A46FAA482E9A51FE3BAAF5BF56", 00:18:15.031 "nsid": 2, 00:18:15.031 "uuid": "40d159a4-6faa-482e-9a51-fe3baaf5bf56" 00:18:15.031 } 00:18:15.031 ], 00:18:15.031 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:15.031 "serial_number": "SPDK1", 00:18:15.031 "subtype": "NVMe" 00:18:15.031 }, 00:18:15.031 { 00:18:15.031 "allow_any_host": true, 00:18:15.031 "hosts": [], 00:18:15.031 "listen_addresses": [ 00:18:15.031 { 00:18:15.031 "adrfam": "IPv4", 00:18:15.031 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:15.031 "trsvcid": "0", 00:18:15.031 "trtype": "VFIOUSER" 00:18:15.031 } 00:18:15.031 ], 00:18:15.031 "max_cntlid": 65519, 00:18:15.031 "max_namespaces": 32, 00:18:15.031 "min_cntlid": 1, 00:18:15.031 "model_number": "SPDK bdev Controller", 00:18:15.031 "namespaces": [ 00:18:15.031 { 00:18:15.031 "bdev_name": "Malloc2", 00:18:15.031 "name": "Malloc2", 00:18:15.031 "nguid": "49B28C4CDFB9450F88AF0A46B372646E", 00:18:15.031 "nsid": 1, 00:18:15.031 "uuid": 
"49b28c4c-dfb9-450f-88af-0a46b372646e" 00:18:15.031 }, 00:18:15.031 { 00:18:15.031 "bdev_name": "Malloc4", 00:18:15.031 "name": "Malloc4", 00:18:15.031 "nguid": "148F8B3E5EF6413E9C5F63FAF91298EC", 00:18:15.031 "nsid": 2, 00:18:15.031 "uuid": "148f8b3e-5ef6-413e-9c5f-63faf91298ec" 00:18:15.031 } 00:18:15.031 ], 00:18:15.031 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:15.031 "serial_number": "SPDK2", 00:18:15.031 "subtype": "NVMe" 00:18:15.031 } 00:18:15.031 ] 00:18:15.031 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 93544 00:18:15.031 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:15.031 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 92860 00:18:15.031 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 92860 ']' 00:18:15.031 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 92860 00:18:15.031 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:15.031 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:15.031 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92860 00:18:15.031 killing process with pid 92860 00:18:15.032 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:15.032 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:15.032 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92860' 00:18:15.032 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 92860 00:18:15.032 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 92860 00:18:15.600 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:15.600 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:15.600 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:15.600 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:15.600 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:15.600 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=93592 00:18:15.600 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:15.600 Process pid: 93592 00:18:15.600 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 93592' 00:18:15.600 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:15.600 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 93592 00:18:15.600 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@831 -- # '[' -z 93592 ']' 00:18:15.600 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.600 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:15.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.600 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.600 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:15.600 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:15.600 [2024-12-14 18:38:31.172800] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:15.600 [2024-12-14 18:38:31.174023] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:15.600 [2024-12-14 18:38:31.174101] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.600 [2024-12-14 18:38:31.312413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:15.859 [2024-12-14 18:38:31.395300] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.859 [2024-12-14 18:38:31.395362] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.859 [2024-12-14 18:38:31.395372] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.859 [2024-12-14 18:38:31.395379] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.859 [2024-12-14 18:38:31.395386] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.859 [2024-12-14 18:38:31.395503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.859 [2024-12-14 18:38:31.396123] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.859 [2024-12-14 18:38:31.396693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:15.859 [2024-12-14 18:38:31.396702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.859 [2024-12-14 18:38:31.522263] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:15.859 [2024-12-14 18:38:31.522714] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:15.859 [2024-12-14 18:38:31.522899] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:15.859 [2024-12-14 18:38:31.523369] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:15.859 [2024-12-14 18:38:31.523593] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
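Note: the vfio-user target bring-up that the trace below performs via /home/vagrant/spdk_repo/spdk/scripts/rpc.py can be reproduced by hand. A minimal sketch, using the same paths, sizes, and subsystem names as this run (it assumes a running nvmf_tgt started with --interrupt-mode and rpc.py on PATH; this is an illustrative outline, not the test script itself):

  # Create the interrupt-mode VFIOUSER transport with the same -M -I flags this run passes
  rpc.py nvmf_create_transport -t VFIOUSER -M -I
  # Socket directory and backing namespace for controller 1
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  rpc.py bdev_malloc_create 64 512 -b Malloc1    # 64 MB malloc bdev, 512-byte blocks
  # Subsystem cnode1: -a allows any host, -s sets serial number SPDK1
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  # Expose the subsystem over vfio-user at the socket directory created above
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0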
00:18:16.427 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:16.427 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:16.427 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:17.803 18:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:17.803 18:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:17.803 18:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:17.803 18:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:17.803 18:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:17.803 18:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:18.062 Malloc1 00:18:18.062 18:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:18.321 18:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:18.580 18:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:18.839 18:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:18.839 18:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:18.839 18:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:19.097 Malloc2 00:18:19.097 18:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:19.356 18:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:19.614 18:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:19.872 18:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:19.872 18:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 93592 00:18:19.872 18:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 93592 ']' 00:18:19.872 18:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 93592 00:18:19.872 18:38:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:19.872 18:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:19.872 18:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93592 00:18:19.872 killing process with pid 93592 00:18:19.872 18:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:19.872 18:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:19.872 18:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93592' 00:18:19.872 18:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 93592 00:18:19.872 18:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 93592 00:18:20.440 18:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:20.440 18:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:20.440 ************************************ 00:18:20.440 END TEST nvmf_vfio_user 00:18:20.440 ************************************ 00:18:20.440 00:18:20.440 real 0m56.264s 00:18:20.440 user 3m34.621s 00:18:20.440 sys 0m4.000s 00:18:20.440 18:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:20.440 18:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:20.440 18:38:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:20.440 18:38:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:20.440 18:38:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:20.440 18:38:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:20.440 ************************************ 00:18:20.440 START TEST nvmf_vfio_user_nvme_compliance 00:18:20.440 ************************************ 00:18:20.440 18:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:20.440 * Looking for test storage... 
00:18:20.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:20.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.440 --rc genhtml_branch_coverage=1 00:18:20.440 --rc genhtml_function_coverage=1 00:18:20.440 --rc genhtml_legend=1 00:18:20.440 --rc geninfo_all_blocks=1 00:18:20.440 --rc geninfo_unexecuted_blocks=1 00:18:20.440 00:18:20.440 ' 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:20.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.440 --rc genhtml_branch_coverage=1 00:18:20.440 --rc genhtml_function_coverage=1 00:18:20.440 --rc genhtml_legend=1 00:18:20.440 --rc geninfo_all_blocks=1 00:18:20.440 --rc geninfo_unexecuted_blocks=1 00:18:20.440 00:18:20.440 ' 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:20.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.440 --rc genhtml_branch_coverage=1 00:18:20.440 --rc genhtml_function_coverage=1 00:18:20.440 --rc genhtml_legend=1 00:18:20.440 --rc geninfo_all_blocks=1 00:18:20.440 --rc geninfo_unexecuted_blocks=1 00:18:20.440 00:18:20.440 ' 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:20.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.440 --rc genhtml_branch_coverage=1 00:18:20.440 --rc genhtml_function_coverage=1 00:18:20.440 --rc genhtml_legend=1 00:18:20.440 --rc geninfo_all_blocks=1 00:18:20.440 --rc 
geninfo_unexecuted_blocks=1 00:18:20.440 00:18:20.440 ' 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:20.440 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:20.441 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=93791 00:18:20.441 Process pid: 93791 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 93791' 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 93791 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 93791 ']' 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:20.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:20.441 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:20.700 [2024-12-14 18:38:36.218493] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:20.700 [2024-12-14 18:38:36.218627] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.700 [2024-12-14 18:38:36.356491] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:20.700 [2024-12-14 18:38:36.433819] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.700 [2024-12-14 18:38:36.433894] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.700 [2024-12-14 18:38:36.433904] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.700 [2024-12-14 18:38:36.433911] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.700 [2024-12-14 18:38:36.433918] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.700 [2024-12-14 18:38:36.435287] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.700 [2024-12-14 18:38:36.435457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.700 [2024-12-14 18:38:36.435460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.959 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:20.959 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:18:20.959 18:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:21.895 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:21.896 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:21.896 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:21.896 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.896 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:21.896 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.896 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:21.896 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:21.896 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.896 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:22.154 malloc0 00:18:22.154 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.154 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:22.154 18:38:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.154 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:22.154 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.155 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:22.155 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.155 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:22.155 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.155 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:22.155 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.155 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:22.155 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.155 18:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:22.155 00:18:22.155 00:18:22.155 CUnit - A unit testing framework for C - Version 2.1-3 00:18:22.155 http://cunit.sourceforge.net/ 00:18:22.155 00:18:22.155 00:18:22.155 Suite: nvme_compliance 00:18:22.155 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-14 18:38:37.895987] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:22.155 [2024-12-14 18:38:37.897408] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:22.155 [2024-12-14 18:38:37.897433] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:22.155 [2024-12-14 18:38:37.897442] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:22.155 [2024-12-14 18:38:37.899021] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:22.413 passed 00:18:22.413 Test: admin_identify_ctrlr_verify_fused ...[2024-12-14 18:38:37.984437] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:22.413 [2024-12-14 18:38:37.987456] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:22.413 passed 00:18:22.413 Test: admin_identify_ns ...[2024-12-14 18:38:38.073718] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:22.413 [2024-12-14 18:38:38.139639] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:22.413 [2024-12-14 18:38:38.147620] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:22.672 [2024-12-14 18:38:38.168786] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 
00:18:22.672 passed 00:18:22.672 Test: admin_get_features_mandatory_features ...[2024-12-14 18:38:38.249450] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:22.672 [2024-12-14 18:38:38.252462] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:22.672 passed 00:18:22.672 Test: admin_get_features_optional_features ...[2024-12-14 18:38:38.332861] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:22.672 [2024-12-14 18:38:38.335873] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:22.672 passed 00:18:22.672 Test: admin_set_features_number_of_queues ...[2024-12-14 18:38:38.414475] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:22.947 [2024-12-14 18:38:38.520815] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:22.947 passed 00:18:22.947 Test: admin_get_log_page_mandatory_logs ...[2024-12-14 18:38:38.602472] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:22.947 [2024-12-14 18:38:38.607495] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:22.947 passed 00:18:23.222 Test: admin_get_log_page_with_lpo ...[2024-12-14 18:38:38.689841] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:23.222 [2024-12-14 18:38:38.757615] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:23.222 [2024-12-14 18:38:38.769736] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:23.222 passed 00:18:23.222 Test: fabric_property_get ...[2024-12-14 18:38:38.852370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:23.222 [2024-12-14 18:38:38.853706] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:23.222 [2024-12-14 18:38:38.855389] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:23.222 passed 00:18:23.222 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-14 18:38:38.933767] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:23.222 [2024-12-14 18:38:38.935079] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:23.222 [2024-12-14 18:38:38.936781] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:23.222 passed 00:18:23.481 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-14 18:38:39.019461] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:23.481 [2024-12-14 18:38:39.103646] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:23.481 [2024-12-14 18:38:39.119641] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:23.481 [2024-12-14 18:38:39.124743] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:23.481 passed 00:18:23.481 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-14 18:38:39.203205] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:23.481 [2024-12-14 18:38:39.204480] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:23.481 [2024-12-14 18:38:39.206234] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:18:23.740 passed 00:18:23.740 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-14 18:38:39.285899] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:23.740 [2024-12-14 18:38:39.367610] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:23.740 [2024-12-14 18:38:39.391643] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:23.740 [2024-12-14 18:38:39.396744] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:23.740 passed 00:18:23.740 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-14 18:38:39.474916] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:23.740 [2024-12-14 18:38:39.476200] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:23.740 [2024-12-14 18:38:39.476252] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:23.740 [2024-12-14 18:38:39.477929] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:23.998 passed 00:18:23.998 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-14 18:38:39.559718] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:23.998 [2024-12-14 18:38:39.655669] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:23.998 [2024-12-14 18:38:39.663613] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:23.998 [2024-12-14 18:38:39.671614] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:23.998 [2024-12-14 18:38:39.679647] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:23.998 [2024-12-14 18:38:39.708805] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:23.998 passed 00:18:24.257 Test: admin_create_io_sq_verify_pc ...[2024-12-14 18:38:39.790792] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:24.257 [2024-12-14 18:38:39.809625] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:24.257 [2024-12-14 18:38:39.829182] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:24.258 passed 00:18:24.258 Test: admin_create_io_qp_max_qps ...[2024-12-14 18:38:39.911544] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:25.634 [2024-12-14 18:38:40.995665] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:18:25.634 [2024-12-14 18:38:41.365078] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:25.893 passed 00:18:25.893 Test: admin_create_io_sq_shared_cq ...[2024-12-14 18:38:41.445866] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:25.893 [2024-12-14 18:38:41.575646] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:25.893 [2024-12-14 18:38:41.612722] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:25.893 passed 00:18:25.893 00:18:25.893 Run Summary: Type Total Ran Passed Failed Inactive 00:18:25.893 suites 1 1 n/a 0 0 00:18:25.893 tests 18 18 18 0 0 00:18:25.893 asserts 360 360 360 0 
n/a 00:18:25.893 00:18:25.893 Elapsed time = 1.530 seconds 00:18:26.152 18:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 93791 00:18:26.152 18:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 93791 ']' 00:18:26.152 18:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 93791 00:18:26.152 18:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:18:26.152 18:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:26.152 18:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93791 00:18:26.152 18:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:26.152 18:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:26.152 killing process with pid 93791 00:18:26.152 18:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93791' 00:18:26.152 18:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 93791 00:18:26.152 18:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 93791 00:18:26.411 18:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:26.411 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:26.411 00:18:26.411 real 0m6.059s 00:18:26.411 user 0m16.536s 00:18:26.411 sys 0m0.568s 00:18:26.411 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:26.411 ************************************ 00:18:26.411 END TEST nvmf_vfio_user_nvme_compliance 00:18:26.411 ************************************ 00:18:26.411 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:26.411 18:38:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:26.411 18:38:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:26.411 18:38:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:26.411 18:38:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:26.411 ************************************ 00:18:26.411 START TEST nvmf_vfio_user_fuzz 00:18:26.411 ************************************ 00:18:26.411 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:26.411 * Looking for test storage... 
00:18:26.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:26.411 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:26.411 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:18:26.411 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:26.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.671 --rc genhtml_branch_coverage=1 00:18:26.671 --rc genhtml_function_coverage=1 00:18:26.671 --rc genhtml_legend=1 00:18:26.671 --rc geninfo_all_blocks=1 00:18:26.671 --rc geninfo_unexecuted_blocks=1 00:18:26.671 00:18:26.671 ' 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:26.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.671 --rc genhtml_branch_coverage=1 00:18:26.671 --rc genhtml_function_coverage=1 00:18:26.671 --rc genhtml_legend=1 00:18:26.671 --rc geninfo_all_blocks=1 00:18:26.671 --rc geninfo_unexecuted_blocks=1 00:18:26.671 00:18:26.671 ' 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:26.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.671 --rc genhtml_branch_coverage=1 00:18:26.671 --rc genhtml_function_coverage=1 00:18:26.671 --rc genhtml_legend=1 00:18:26.671 --rc geninfo_all_blocks=1 00:18:26.671 --rc geninfo_unexecuted_blocks=1 00:18:26.671 00:18:26.671 ' 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:26.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.671 --rc genhtml_branch_coverage=1 00:18:26.671 --rc genhtml_function_coverage=1 00:18:26.671 --rc genhtml_legend=1 00:18:26.671 --rc geninfo_all_blocks=1 00:18:26.671 --rc geninfo_unexecuted_blocks=1 00:18:26.671 00:18:26.671 ' 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
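The scripts/common.sh xtrace above is a numeric version comparison deciding whether the installed lcov predates 2.x. Reconstructed from the logged steps (split on '.', '-', ':'; compare field by field, padding the shorter version with zeros), it behaves roughly like this sketch — inferred from the trace, not the verbatim source:

    lt() {  # lt 1.15 2  ->  true, because 1 < 2 in the first field
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1  # equal versions are not less-than
    }

Here that verdict merely selects the extra --rc lcov_branch_coverage/lcov_function_coverage options exported in the LCOV_OPTS block above.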
00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.671 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:26.672 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:26.672 18:38:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=93938 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 93938' 00:18:26.672 Process pid: 93938 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 93938 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 93938 ']' 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:26.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
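Condensed from the trace above: the fuzz harness clears any stale socket directory, starts a single-core target, and blocks on the repo's waitforlisten helper until the RPC socket answers (a sketch with arguments copied from the log):

    rm -rf /var/run/vfio-user
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # one core (0x1) suffices here
    nvmfpid=$!
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten $nvmfpid    # repo helper: polls until /var/tmp/spdk.sock accepts RPCs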
00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:26.672 18:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:27.606 18:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:27.606 18:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:18:27.606 18:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:28.543 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:28.543 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.544 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:28.544 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.544 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:28.544 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:28.544 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.544 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:28.802 malloc0 00:18:28.802 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.802 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:28.802 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.802 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:28.802 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.802 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:28.802 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.802 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:28.803 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.803 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:28.803 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.803 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:28.803 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.803 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
00:18:28.803 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:29.062 Shutting down the fuzz application 00:18:29.062 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:29.062 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.062 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:29.062 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.062 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 93938 00:18:29.062 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 93938 ']' 00:18:29.062 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 93938 00:18:29.062 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:18:29.062 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:29.062 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93938 00:18:29.062 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:29.062 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:29.062 killing process with pid 93938 00:18:29.062 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93938' 00:18:29.062 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 93938 00:18:29.062 18:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 93938 00:18:29.321 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:29.584 00:18:29.584 real 0m3.020s 00:18:29.584 user 0m3.148s 00:18:29.584 sys 0m0.501s 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:29.584 ************************************ 00:18:29.584 END TEST nvmf_vfio_user_fuzz 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:29.584 ************************************ 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:29.584 18:38:45 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:29.584 ************************************ 00:18:29.584 START TEST nvmf_auth_target 00:18:29.584 ************************************ 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:29.584 * Looking for test storage... 00:18:29.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:29.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.584 --rc genhtml_branch_coverage=1 00:18:29.584 --rc genhtml_function_coverage=1 00:18:29.584 --rc genhtml_legend=1 00:18:29.584 --rc geninfo_all_blocks=1 00:18:29.584 --rc geninfo_unexecuted_blocks=1 00:18:29.584 00:18:29.584 ' 00:18:29.584 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:29.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.584 --rc genhtml_branch_coverage=1 00:18:29.584 --rc genhtml_function_coverage=1 00:18:29.584 --rc genhtml_legend=1 00:18:29.584 --rc geninfo_all_blocks=1 00:18:29.584 --rc geninfo_unexecuted_blocks=1 00:18:29.585 00:18:29.585 ' 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:29.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.585 --rc genhtml_branch_coverage=1 00:18:29.585 --rc genhtml_function_coverage=1 00:18:29.585 --rc genhtml_legend=1 00:18:29.585 --rc geninfo_all_blocks=1 00:18:29.585 --rc geninfo_unexecuted_blocks=1 00:18:29.585 00:18:29.585 ' 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:29.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.585 --rc genhtml_branch_coverage=1 00:18:29.585 --rc genhtml_function_coverage=1 00:18:29.585 --rc genhtml_legend=1 00:18:29.585 --rc geninfo_all_blocks=1 00:18:29.585 --rc geninfo_unexecuted_blocks=1 00:18:29.585 00:18:29.585 ' 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:29.585 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:29.847 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:29.847 
18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:29.847 Cannot find device "nvmf_init_br" 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:29.847 Cannot find device "nvmf_init_br2" 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:29.847 Cannot find device "nvmf_tgt_br" 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:29.847 Cannot find device "nvmf_tgt_br2" 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:29.847 Cannot find device "nvmf_init_br" 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:29.847 Cannot find device "nvmf_init_br2" 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:29.847 Cannot find device "nvmf_tgt_br" 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:29.847 Cannot find device "nvmf_tgt_br2" 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:29.847 Cannot find device "nvmf_br" 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:29.847 Cannot find device "nvmf_init_if" 00:18:29.847 18:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:29.847 Cannot find device "nvmf_init_if2" 00:18:29.847 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:18:29.848 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:29.848 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:29.848 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:18:29.848 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:29.848 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:29.848 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:18:29.848 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:29.848 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:29.848 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:29.848 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:29.848 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:29.848 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:29.848 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:29.848 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:29.848 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:29.848 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:29.848 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:29.848 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:29.848 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:29.848 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:29.848 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:30.106 18:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:30.106 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:30.106 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:18:30.106 00:18:30.106 --- 10.0.0.3 ping statistics --- 00:18:30.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.106 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:30.106 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:30.106 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:18:30.106 00:18:30.106 --- 10.0.0.4 ping statistics --- 00:18:30.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.106 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:30.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:30.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:30.106 00:18:30.106 --- 10.0.0.1 ping statistics --- 00:18:30.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.106 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:30.106 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:30.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:30.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:18:30.106 00:18:30.106 --- 10.0.0.2 ping statistics --- 00:18:30.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.107 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # return 0 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=94187 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 94187 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 94187 ']' 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
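The nvmf_veth_init block traced above builds a bridged veth topology with the target-side interfaces moved into a network namespace, then verifies reachability with the pings whose statistics appear above. A minimal sketch of one initiator/target pair follows (the trace creates two of each, and tags its iptables ACCEPT rules with an SPDK_NVMF comment so cleanup can find them later):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                        # bridge both peers together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3      # initiator -> namespaced target, as verified above
# the target app itself then runs inside the namespace:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth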
00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:30.107 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=94219 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=51079933bf0f388bf5eb563b5a9781fab02d8449a46286f3 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.xmg 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 51079933bf0f388bf5eb563b5a9781fab02d8449a46286f3 0 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 51079933bf0f388bf5eb563b5a9781fab02d8449a46286f3 0 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=51079933bf0f388bf5eb563b5a9781fab02d8449a46286f3 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:30.674 18:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.xmg 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.xmg 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.xmg 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:30.674 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=615eaca706a37ec3d35f86e10a42a2d5ebae1ef26ff73b368dc13033fd4cc03f 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.oid 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 615eaca706a37ec3d35f86e10a42a2d5ebae1ef26ff73b368dc13033fd4cc03f 3 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 615eaca706a37ec3d35f86e10a42a2d5ebae1ef26ff73b368dc13033fd4cc03f 3 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=615eaca706a37ec3d35f86e10a42a2d5ebae1ef26ff73b368dc13033fd4cc03f 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.oid 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.oid 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.oid 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:18:30.675 18:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:18:30.675 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=973b92616f2ef566e74a4e7e09d9a540 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.Zu7 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 973b92616f2ef566e74a4e7e09d9a540 1 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 973b92616f2ef566e74a4e7e09d9a540 1 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=973b92616f2ef566e74a4e7e09d9a540 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.Zu7 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.Zu7 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Zu7 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=77872d2b3a32cb9607a853c32b4a53b7035030ad4bd724ce 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.obU 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 77872d2b3a32cb9607a853c32b4a53b7035030ad4bd724ce 2 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 77872d2b3a32cb9607a853c32b4a53b7035030ad4bd724ce 2 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=77872d2b3a32cb9607a853c32b4a53b7035030ad4bd724ce 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.obU 00:18:30.934 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.obU 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.obU 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=5073114f34cf897c30fbe9b45b626a28969363cec8fc2699 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.T5z 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 5073114f34cf897c30fbe9b45b626a28969363cec8fc2699 2 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 5073114f34cf897c30fbe9b45b626a28969363cec8fc2699 2 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=5073114f34cf897c30fbe9b45b626a28969363cec8fc2699 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.T5z 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.T5z 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.T5z 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:30.935 18:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=ffd570c8dd2928dec97e8ee1dd2b86cc 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.auT 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key ffd570c8dd2928dec97e8ee1dd2b86cc 1 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 ffd570c8dd2928dec97e8ee1dd2b86cc 1 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=ffd570c8dd2928dec97e8ee1dd2b86cc 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.auT 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.auT 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.auT 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=e6602750c05845a04f41ffdbfc156be8401e2d60f91238e73c1cff785109e2a6 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.X77 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 
e6602750c05845a04f41ffdbfc156be8401e2d60f91238e73c1cff785109e2a6 3 00:18:30.935 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 e6602750c05845a04f41ffdbfc156be8401e2d60f91238e73c1cff785109e2a6 3 00:18:31.194 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:31.194 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:31.194 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=e6602750c05845a04f41ffdbfc156be8401e2d60f91238e73c1cff785109e2a6 00:18:31.194 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:18:31.194 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:31.194 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.X77 00:18:31.194 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.X77 00:18:31.194 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.X77 00:18:31.194 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:31.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.194 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 94187 00:18:31.194 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 94187 ']' 00:18:31.194 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.194 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:31.194 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.194 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:31.194 18:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.453 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:31.453 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:31.453 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 94219 /var/tmp/host.sock 00:18:31.453 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 94219 ']' 00:18:31.453 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:31.453 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:31.453 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:31.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
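The gen_dhchap_key calls traced above draw len/2 random bytes with xxd, write them to a mktemp file, and format them through an inline "python -" snippet whose body is not captured in the trace. Assuming that snippet emits the standard NVMe DH-HMAC-CHAP secret representation, DHHC-1:<hash id>:<base64 of key plus CRC-32>:, with hash ids matching the digests map above (null=00, sha256=01, sha384=02, sha512=03), a self-contained sketch for the 48-character null-digest key would be:

key=$(xxd -p -c0 -l 24 /dev/urandom)       # 24 random bytes -> 48 hex chars
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'EOF'
import base64, binascii, sys
raw = bytes.fromhex(sys.argv[1])
# DHHC-1 secrets append a little-endian CRC-32 of the key before base64-encoding
blob = raw + binascii.crc32(raw).to_bytes(4, "little")
print("DHHC-1:00:%s:" % base64.b64encode(blob).decode())
EOF
chmod 0600 "$file"                         # keys must not be group/world-readable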
00:18:31.453 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:31.453 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.712 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:31.712 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:31.712 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:31.712 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.712 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.712 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.712 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:31.712 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.xmg 00:18:31.712 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.712 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.712 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.712 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.xmg 00:18:31.712 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.xmg 00:18:31.971 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.oid ]] 00:18:31.971 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oid 00:18:31.971 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.971 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.971 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.971 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oid 00:18:31.971 18:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oid 00:18:32.539 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:32.539 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Zu7 00:18:32.539 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.539 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.539 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.539 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Zu7 00:18:32.539 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Zu7 00:18:32.797 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.obU ]] 00:18:32.797 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.obU 00:18:32.797 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.797 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.797 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.798 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.obU 00:18:32.798 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.obU 00:18:33.056 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:33.057 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.T5z 00:18:33.057 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.057 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.057 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.057 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.T5z 00:18:33.057 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.T5z 00:18:33.315 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.auT ]] 00:18:33.315 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.auT 00:18:33.316 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.316 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.316 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.316 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.auT 00:18:33.316 18:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.auT 00:18:33.574 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:33.574 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.X77 00:18:33.574 18:38:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.574 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.574 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.574 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.X77 00:18:33.574 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.X77 00:18:33.833 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:33.833 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:33.833 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.833 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.833 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:33.833 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:34.092 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:34.092 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.092 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:34.092 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:34.092 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:34.092 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.092 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.092 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.092 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.092 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.092 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.092 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.092 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.350 00:18:34.350 18:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.350 18:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.350 18:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.918 18:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.918 18:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.918 18:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.918 18:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.918 18:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.918 18:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.918 { 00:18:34.918 "auth": { 00:18:34.918 "dhgroup": "null", 00:18:34.918 "digest": "sha256", 00:18:34.918 "state": "completed" 00:18:34.918 }, 00:18:34.918 "cntlid": 1, 00:18:34.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:18:34.918 "listen_address": { 00:18:34.918 "adrfam": "IPv4", 00:18:34.918 "traddr": "10.0.0.3", 00:18:34.918 "trsvcid": "4420", 00:18:34.918 "trtype": "TCP" 00:18:34.918 }, 00:18:34.918 "peer_address": { 00:18:34.918 "adrfam": "IPv4", 00:18:34.918 "traddr": "10.0.0.1", 00:18:34.918 "trsvcid": "57656", 00:18:34.918 "trtype": "TCP" 00:18:34.918 }, 00:18:34.918 "qid": 0, 00:18:34.918 "state": "enabled", 00:18:34.918 "thread": "nvmf_tgt_poll_group_000" 00:18:34.918 } 00:18:34.918 ]' 00:18:34.918 18:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.918 18:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.918 18:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.918 18:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:34.918 18:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.918 18:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.918 18:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.918 18:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.177 18:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:18:35.177 18:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:18:39.367 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.367 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:18:39.367 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.367 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.367 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.367 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.367 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:39.368 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:39.368 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:39.368 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.368 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:39.368 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:39.368 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:39.368 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.368 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.368 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.368 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.368 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.368 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.368 18:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.368 18:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.626 00:18:39.626 18:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.626 18:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.626 18:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.883 18:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.883 18:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.883 18:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.883 18:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.883 18:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.883 18:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.883 { 00:18:39.883 "auth": { 00:18:39.883 "dhgroup": "null", 00:18:39.883 "digest": "sha256", 00:18:39.883 "state": "completed" 00:18:39.883 }, 00:18:39.883 "cntlid": 3, 00:18:39.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:18:39.883 "listen_address": { 00:18:39.884 "adrfam": "IPv4", 00:18:39.884 "traddr": "10.0.0.3", 00:18:39.884 "trsvcid": "4420", 00:18:39.884 "trtype": "TCP" 00:18:39.884 }, 00:18:39.884 "peer_address": { 00:18:39.884 "adrfam": "IPv4", 00:18:39.884 "traddr": "10.0.0.1", 00:18:39.884 "trsvcid": "52218", 00:18:39.884 "trtype": "TCP" 00:18:39.884 }, 00:18:39.884 "qid": 0, 00:18:39.884 "state": "enabled", 00:18:39.884 "thread": "nvmf_tgt_poll_group_000" 00:18:39.884 } 00:18:39.884 ]' 00:18:39.884 18:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.884 18:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.884 18:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.884 18:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:39.884 18:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.142 18:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.142 18:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.142 18:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.142 18:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret 
DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:18:40.142 18:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:18:41.078 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.078 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:18:41.078 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.078 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.078 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.078 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.078 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:41.078 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:41.337 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:41.337 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.337 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:41.337 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:41.337 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:41.337 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.337 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.337 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.337 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.337 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.337 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.337 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.337 18:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.613 00:18:41.613 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.613 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.613 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.872 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.872 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.872 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.872 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.872 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.872 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.872 { 00:18:41.872 "auth": { 00:18:41.872 "dhgroup": "null", 00:18:41.872 "digest": "sha256", 00:18:41.872 "state": "completed" 00:18:41.872 }, 00:18:41.872 "cntlid": 5, 00:18:41.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:18:41.872 "listen_address": { 00:18:41.872 "adrfam": "IPv4", 00:18:41.872 "traddr": "10.0.0.3", 00:18:41.872 "trsvcid": "4420", 00:18:41.872 "trtype": "TCP" 00:18:41.872 }, 00:18:41.872 "peer_address": { 00:18:41.872 "adrfam": "IPv4", 00:18:41.872 "traddr": "10.0.0.1", 00:18:41.872 "trsvcid": "52256", 00:18:41.872 "trtype": "TCP" 00:18:41.872 }, 00:18:41.872 "qid": 0, 00:18:41.872 "state": "enabled", 00:18:41.872 "thread": "nvmf_tgt_poll_group_000" 00:18:41.872 } 00:18:41.872 ]' 00:18:41.872 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.872 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.872 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.872 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:41.872 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.131 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.131 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.131 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.391 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:18:42.391 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:18:42.959 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.959 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:18:42.959 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.959 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.959 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.959 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.959 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.959 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:43.219 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:43.219 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.219 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:43.219 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:43.219 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:43.219 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.219 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:18:43.219 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.219 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.479 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.479 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:43.479 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:43.479 18:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:43.738 00:18:43.738 18:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.738 18:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.738 18:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.997 18:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.997 18:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.997 18:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.997 18:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.997 18:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.997 18:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.997 { 00:18:43.997 "auth": { 00:18:43.997 "dhgroup": "null", 00:18:43.997 "digest": "sha256", 00:18:43.997 "state": "completed" 00:18:43.997 }, 00:18:43.997 "cntlid": 7, 00:18:43.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:18:43.997 "listen_address": { 00:18:43.997 "adrfam": "IPv4", 00:18:43.997 "traddr": "10.0.0.3", 00:18:43.997 "trsvcid": "4420", 00:18:43.997 "trtype": "TCP" 00:18:43.997 }, 00:18:43.997 "peer_address": { 00:18:43.997 "adrfam": "IPv4", 00:18:43.997 "traddr": "10.0.0.1", 00:18:43.997 "trsvcid": "52280", 00:18:43.997 "trtype": "TCP" 00:18:43.997 }, 00:18:43.997 "qid": 0, 00:18:43.997 "state": "enabled", 00:18:43.997 "thread": "nvmf_tgt_poll_group_000" 00:18:43.997 } 00:18:43.997 ]' 00:18:43.997 18:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.997 18:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.997 18:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.997 18:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:43.997 18:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.997 18:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.997 18:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.997 18:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.564 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:18:44.564 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:18:45.132 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.132 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:18:45.132 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.132 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.132 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.132 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.132 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.132 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:45.132 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:45.391 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:45.391 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.391 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:45.391 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:45.391 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:45.391 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.391 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.391 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.391 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.391 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.391 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.391 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.391 18:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.650 00:18:45.650 18:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.650 18:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.650 18:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.909 18:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.909 18:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.909 18:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.909 18:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.909 18:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.909 18:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.909 { 00:18:45.910 "auth": { 00:18:45.910 "dhgroup": "ffdhe2048", 00:18:45.910 "digest": "sha256", 00:18:45.910 "state": "completed" 00:18:45.910 }, 00:18:45.910 "cntlid": 9, 00:18:45.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:18:45.910 "listen_address": { 00:18:45.910 "adrfam": "IPv4", 00:18:45.910 "traddr": "10.0.0.3", 00:18:45.910 "trsvcid": "4420", 00:18:45.910 "trtype": "TCP" 00:18:45.910 }, 00:18:45.910 "peer_address": { 00:18:45.910 "adrfam": "IPv4", 00:18:45.910 "traddr": "10.0.0.1", 00:18:45.910 "trsvcid": "52308", 00:18:45.910 "trtype": "TCP" 00:18:45.910 }, 00:18:45.910 "qid": 0, 00:18:45.910 "state": "enabled", 00:18:45.910 "thread": "nvmf_tgt_poll_group_000" 00:18:45.910 } 00:18:45.910 ]' 00:18:45.910 18:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.910 18:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.910 18:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.910 18:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:45.910 18:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.910 18:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.910 18:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.910 18:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.477 
18:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:18:46.477 18:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:18:47.045 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.045 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:18:47.045 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.045 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.045 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.045 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.045 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:47.045 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:47.304 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:47.304 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.304 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:47.304 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:47.304 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:47.304 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.305 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.305 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.305 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.305 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.305 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.305 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.305 18:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.872 00:18:47.872 18:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.872 18:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.872 18:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.132 18:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.132 18:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.132 18:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.132 18:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.132 18:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.132 18:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.132 { 00:18:48.132 "auth": { 00:18:48.132 "dhgroup": "ffdhe2048", 00:18:48.132 "digest": "sha256", 00:18:48.132 "state": "completed" 00:18:48.132 }, 00:18:48.132 "cntlid": 11, 00:18:48.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:18:48.132 "listen_address": { 00:18:48.132 "adrfam": "IPv4", 00:18:48.132 "traddr": "10.0.0.3", 00:18:48.132 "trsvcid": "4420", 00:18:48.132 "trtype": "TCP" 00:18:48.132 }, 00:18:48.132 "peer_address": { 00:18:48.132 "adrfam": "IPv4", 00:18:48.132 "traddr": "10.0.0.1", 00:18:48.132 "trsvcid": "38880", 00:18:48.132 "trtype": "TCP" 00:18:48.132 }, 00:18:48.132 "qid": 0, 00:18:48.132 "state": "enabled", 00:18:48.132 "thread": "nvmf_tgt_poll_group_000" 00:18:48.132 } 00:18:48.132 ]' 00:18:48.132 18:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.132 18:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.132 18:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.132 18:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:48.132 18:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.132 18:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.132 18:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.132 
18:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.699 18:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:18:48.699 18:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:18:49.267 18:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.267 18:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:18:49.267 18:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.267 18:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.267 18:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.267 18:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.267 18:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:49.267 18:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:49.526 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:49.526 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.526 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:49.526 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:49.526 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:49.526 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.526 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.526 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.526 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.526 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:18:49.526 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.526 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.526 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.785 00:18:49.785 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.785 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.785 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.044 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.044 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.044 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.044 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.044 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.044 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.044 { 00:18:50.044 "auth": { 00:18:50.044 "dhgroup": "ffdhe2048", 00:18:50.044 "digest": "sha256", 00:18:50.044 "state": "completed" 00:18:50.044 }, 00:18:50.044 "cntlid": 13, 00:18:50.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:18:50.044 "listen_address": { 00:18:50.044 "adrfam": "IPv4", 00:18:50.044 "traddr": "10.0.0.3", 00:18:50.044 "trsvcid": "4420", 00:18:50.044 "trtype": "TCP" 00:18:50.044 }, 00:18:50.044 "peer_address": { 00:18:50.044 "adrfam": "IPv4", 00:18:50.044 "traddr": "10.0.0.1", 00:18:50.044 "trsvcid": "38906", 00:18:50.044 "trtype": "TCP" 00:18:50.044 }, 00:18:50.044 "qid": 0, 00:18:50.044 "state": "enabled", 00:18:50.044 "thread": "nvmf_tgt_poll_group_000" 00:18:50.044 } 00:18:50.044 ]' 00:18:50.044 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.303 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.303 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.303 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:50.303 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.303 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.303 18:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.303 18:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.562 18:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:18:50.562 18:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:18:51.130 18:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.130 18:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:18:51.130 18:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.130 18:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.130 18:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.130 18:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.130 18:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.130 18:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.388 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:51.388 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.388 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:51.388 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:51.388 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:51.388 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.388 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:18:51.388 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.388 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
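
Condensed from the commands traced above and below, one connect_authenticate round reduces to the following RPC sequence (a sketch reusing the socket paths, NQNs, and key names from this log; target-side rpc_cmd talks to the default /var/tmp/spdk.sock, the host side to /var/tmp/host.sock):

# target: register the key file and authorize the host NQN for it
scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.X77
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3

# host: pin the digest/dhgroup under test, then attach with DH-HMAC-CHAP
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.X77
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

# verify the qpair finished authentication, then tear down for the next round
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

As in the JSON qpair dumps above, each round's auth block should report the dhgroup and digest under test with state "completed" before the controller is detached and the next digest/dhgroup/key combination is tried.
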
00:18:51.388 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.388 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:51.389 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:51.389 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:51.970 00:18:51.970 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.970 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.970 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.970 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.970 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.970 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.970 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.970 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.970 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.970 { 00:18:51.970 "auth": { 00:18:51.970 "dhgroup": "ffdhe2048", 00:18:51.970 "digest": "sha256", 00:18:51.970 "state": "completed" 00:18:51.970 }, 00:18:51.970 "cntlid": 15, 00:18:51.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:18:51.970 "listen_address": { 00:18:51.970 "adrfam": "IPv4", 00:18:51.970 "traddr": "10.0.0.3", 00:18:51.970 "trsvcid": "4420", 00:18:51.970 "trtype": "TCP" 00:18:51.970 }, 00:18:51.970 "peer_address": { 00:18:51.970 "adrfam": "IPv4", 00:18:51.970 "traddr": "10.0.0.1", 00:18:51.970 "trsvcid": "38942", 00:18:51.970 "trtype": "TCP" 00:18:51.970 }, 00:18:51.970 "qid": 0, 00:18:51.970 "state": "enabled", 00:18:51.970 "thread": "nvmf_tgt_poll_group_000" 00:18:51.970 } 00:18:51.970 ]' 00:18:51.970 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.243 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.243 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.243 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:52.243 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.243 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.243 
18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.243 18:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.502 18:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:18:52.502 18:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:18:53.069 18:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.328 18:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:18:53.328 18:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.328 18:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.328 18:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.328 18:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.328 18:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.328 18:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.328 18:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.587 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:53.587 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.587 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:53.587 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:53.587 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:53.587 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.587 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.587 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.587 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:53.587 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.587 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.587 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.587 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.845 00:18:53.845 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.845 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.845 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.413 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.413 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.413 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.413 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.413 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.413 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.413 { 00:18:54.413 "auth": { 00:18:54.413 "dhgroup": "ffdhe3072", 00:18:54.413 "digest": "sha256", 00:18:54.413 "state": "completed" 00:18:54.413 }, 00:18:54.413 "cntlid": 17, 00:18:54.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:18:54.413 "listen_address": { 00:18:54.413 "adrfam": "IPv4", 00:18:54.413 "traddr": "10.0.0.3", 00:18:54.413 "trsvcid": "4420", 00:18:54.413 "trtype": "TCP" 00:18:54.413 }, 00:18:54.413 "peer_address": { 00:18:54.413 "adrfam": "IPv4", 00:18:54.413 "traddr": "10.0.0.1", 00:18:54.413 "trsvcid": "38962", 00:18:54.413 "trtype": "TCP" 00:18:54.413 }, 00:18:54.413 "qid": 0, 00:18:54.413 "state": "enabled", 00:18:54.413 "thread": "nvmf_tgt_poll_group_000" 00:18:54.413 } 00:18:54.413 ]' 00:18:54.413 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.413 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.413 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.413 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:54.413 18:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.413 18:39:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.413 18:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.413 18:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.672 18:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:18:54.672 18:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:18:55.240 18:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.240 18:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:18:55.240 18:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.240 18:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.240 18:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.240 18:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.240 18:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.240 18:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.498 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:55.498 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.498 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:55.498 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:55.498 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:55.498 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.498 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:18:55.498 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.498 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.756 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.756 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.756 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.756 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.015 00:18:56.015 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.015 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.015 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.274 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.275 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.275 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.275 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.275 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.275 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.275 { 00:18:56.275 "auth": { 00:18:56.275 "dhgroup": "ffdhe3072", 00:18:56.275 "digest": "sha256", 00:18:56.275 "state": "completed" 00:18:56.275 }, 00:18:56.275 "cntlid": 19, 00:18:56.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:18:56.275 "listen_address": { 00:18:56.275 "adrfam": "IPv4", 00:18:56.275 "traddr": "10.0.0.3", 00:18:56.275 "trsvcid": "4420", 00:18:56.275 "trtype": "TCP" 00:18:56.275 }, 00:18:56.275 "peer_address": { 00:18:56.275 "adrfam": "IPv4", 00:18:56.275 "traddr": "10.0.0.1", 00:18:56.275 "trsvcid": "36948", 00:18:56.275 "trtype": "TCP" 00:18:56.275 }, 00:18:56.275 "qid": 0, 00:18:56.275 "state": "enabled", 00:18:56.275 "thread": "nvmf_tgt_poll_group_000" 00:18:56.275 } 00:18:56.275 ]' 00:18:56.275 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.534 18:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.534 18:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.534 18:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:56.534 18:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.534 18:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.534 18:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.534 18:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.792 18:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:18:56.792 18:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:18:57.361 18:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.361 18:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:18:57.361 18:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.361 18:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.361 18:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.361 18:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.361 18:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:57.361 18:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:57.620 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:57.620 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.620 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:57.620 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:57.620 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:57.620 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.620 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.620 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.620 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.620 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.620 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.620 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.620 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.879 00:18:57.879 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.879 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.879 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.446 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.446 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.446 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.446 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.446 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.446 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.446 { 00:18:58.446 "auth": { 00:18:58.447 "dhgroup": "ffdhe3072", 00:18:58.447 "digest": "sha256", 00:18:58.447 "state": "completed" 00:18:58.447 }, 00:18:58.447 "cntlid": 21, 00:18:58.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:18:58.447 "listen_address": { 00:18:58.447 "adrfam": "IPv4", 00:18:58.447 "traddr": "10.0.0.3", 00:18:58.447 "trsvcid": "4420", 00:18:58.447 "trtype": "TCP" 00:18:58.447 }, 00:18:58.447 "peer_address": { 00:18:58.447 "adrfam": "IPv4", 00:18:58.447 "traddr": "10.0.0.1", 00:18:58.447 "trsvcid": "36970", 00:18:58.447 "trtype": "TCP" 00:18:58.447 }, 00:18:58.447 "qid": 0, 00:18:58.447 "state": "enabled", 00:18:58.447 "thread": "nvmf_tgt_poll_group_000" 00:18:58.447 } 00:18:58.447 ]' 00:18:58.447 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.447 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.447 18:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.447 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:58.447 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.447 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.447 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.447 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.705 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:18:58.705 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:18:59.646 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.646 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:18:59.646 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.647 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.647 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.647 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.647 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:59.647 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:59.647 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:59.647 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.647 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:59.647 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:59.647 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:59.647 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.647 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:18:59.647 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.647 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.647 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.647 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:59.647 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:59.647 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.216 00:19:00.216 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.216 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.216 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.475 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.475 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.475 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.475 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.475 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.475 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.475 { 00:19:00.475 "auth": { 00:19:00.475 "dhgroup": "ffdhe3072", 00:19:00.475 "digest": "sha256", 00:19:00.476 "state": "completed" 00:19:00.476 }, 00:19:00.476 "cntlid": 23, 00:19:00.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:00.476 "listen_address": { 00:19:00.476 "adrfam": "IPv4", 00:19:00.476 "traddr": "10.0.0.3", 00:19:00.476 "trsvcid": "4420", 00:19:00.476 "trtype": "TCP" 00:19:00.476 }, 00:19:00.476 "peer_address": { 00:19:00.476 "adrfam": "IPv4", 00:19:00.476 "traddr": "10.0.0.1", 00:19:00.476 "trsvcid": "36994", 00:19:00.476 "trtype": "TCP" 00:19:00.476 }, 00:19:00.476 "qid": 0, 00:19:00.476 "state": "enabled", 00:19:00.476 "thread": "nvmf_tgt_poll_group_000" 00:19:00.476 } 00:19:00.476 ]' 00:19:00.476 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.476 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:00.476 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.476 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:00.476 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.476 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.476 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.476 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.735 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:19:00.735 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:19:01.672 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.672 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:01.672 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.672 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.672 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.672 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.673 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.673 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:01.673 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:01.931 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:01.931 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.932 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:01.932 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:01.932 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:01.932 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.932 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.932 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.932 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.932 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.932 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.932 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.932 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.190 00:19:02.190 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.190 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.190 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.758 18:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.758 18:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.758 18:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.758 18:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.758 18:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.758 18:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.758 { 00:19:02.758 "auth": { 00:19:02.758 "dhgroup": "ffdhe4096", 00:19:02.758 "digest": "sha256", 00:19:02.758 "state": "completed" 00:19:02.758 }, 00:19:02.758 "cntlid": 25, 00:19:02.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:02.758 "listen_address": { 00:19:02.758 "adrfam": "IPv4", 00:19:02.758 "traddr": "10.0.0.3", 00:19:02.758 "trsvcid": "4420", 00:19:02.758 "trtype": "TCP" 00:19:02.758 }, 00:19:02.758 "peer_address": { 00:19:02.758 "adrfam": "IPv4", 00:19:02.758 "traddr": "10.0.0.1", 00:19:02.758 "trsvcid": "37024", 00:19:02.758 "trtype": "TCP" 00:19:02.758 }, 00:19:02.758 "qid": 0, 00:19:02.758 "state": "enabled", 00:19:02.758 "thread": "nvmf_tgt_poll_group_000" 00:19:02.758 } 00:19:02.758 ]' 00:19:02.758 18:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:19:02.758 18:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.758 18:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.758 18:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:02.758 18:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.758 18:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.758 18:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.758 18:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.017 18:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:19:03.017 18:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:19:03.652 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.652 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:03.652 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.652 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.652 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.652 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.652 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:03.652 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:03.911 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:03.911 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.911 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:03.911 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:03.911 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:03.911 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.911 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.911 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.911 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.911 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.911 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.911 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.911 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.478 00:19:04.478 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.478 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.478 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.736 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.736 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.736 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.736 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.736 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.736 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.736 { 00:19:04.736 "auth": { 00:19:04.736 "dhgroup": "ffdhe4096", 00:19:04.736 "digest": "sha256", 00:19:04.736 "state": "completed" 00:19:04.736 }, 00:19:04.736 "cntlid": 27, 00:19:04.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:04.736 "listen_address": { 00:19:04.736 "adrfam": "IPv4", 00:19:04.736 "traddr": "10.0.0.3", 00:19:04.736 "trsvcid": "4420", 00:19:04.736 "trtype": "TCP" 00:19:04.736 }, 00:19:04.736 "peer_address": { 00:19:04.736 "adrfam": "IPv4", 00:19:04.736 "traddr": "10.0.0.1", 00:19:04.736 "trsvcid": "37056", 00:19:04.736 "trtype": "TCP" 00:19:04.736 }, 00:19:04.737 "qid": 0, 
00:19:04.737 "state": "enabled", 00:19:04.737 "thread": "nvmf_tgt_poll_group_000" 00:19:04.737 } 00:19:04.737 ]' 00:19:04.737 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.737 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.737 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.737 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:04.737 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.996 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.996 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.996 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.255 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:19:05.255 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:19:05.821 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.821 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:05.821 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.821 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.821 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.821 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.821 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:05.821 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:06.080 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:06.080 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.080 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:19:06.080 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:06.080 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:06.080 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.080 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.080 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.080 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.080 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.080 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.080 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.080 18:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.647 00:19:06.647 18:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.647 18:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.647 18:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.906 18:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.906 18:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.906 18:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.906 18:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.906 18:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.906 18:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.906 { 00:19:06.906 "auth": { 00:19:06.906 "dhgroup": "ffdhe4096", 00:19:06.906 "digest": "sha256", 00:19:06.906 "state": "completed" 00:19:06.906 }, 00:19:06.906 "cntlid": 29, 00:19:06.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:06.906 "listen_address": { 00:19:06.906 "adrfam": "IPv4", 00:19:06.906 "traddr": "10.0.0.3", 00:19:06.906 "trsvcid": "4420", 00:19:06.906 "trtype": "TCP" 00:19:06.906 }, 00:19:06.906 "peer_address": { 00:19:06.906 "adrfam": "IPv4", 00:19:06.906 "traddr": "10.0.0.1", 
00:19:06.906 "trsvcid": "40026", 00:19:06.906 "trtype": "TCP" 00:19:06.906 }, 00:19:06.906 "qid": 0, 00:19:06.906 "state": "enabled", 00:19:06.906 "thread": "nvmf_tgt_poll_group_000" 00:19:06.906 } 00:19:06.906 ]' 00:19:06.906 18:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.906 18:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.906 18:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.906 18:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:06.907 18:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.907 18:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.907 18:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.907 18:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.165 18:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:19:07.165 18:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:19:08.102 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.103 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:08.103 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.103 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.103 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.103 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.103 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:08.103 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:08.362 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:08.362 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:19:08.362 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:08.362 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:08.362 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:08.362 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.362 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:19:08.362 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.362 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.362 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.362 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:08.362 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:08.362 18:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:08.621 00:19:08.621 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.621 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.621 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.880 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.880 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.880 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.880 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.880 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.880 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.880 { 00:19:08.880 "auth": { 00:19:08.880 "dhgroup": "ffdhe4096", 00:19:08.880 "digest": "sha256", 00:19:08.880 "state": "completed" 00:19:08.880 }, 00:19:08.880 "cntlid": 31, 00:19:08.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:08.880 "listen_address": { 00:19:08.880 "adrfam": "IPv4", 00:19:08.880 "traddr": "10.0.0.3", 00:19:08.880 "trsvcid": "4420", 00:19:08.880 "trtype": "TCP" 00:19:08.880 }, 00:19:08.880 "peer_address": { 00:19:08.880 "adrfam": "IPv4", 00:19:08.880 "traddr": 
"10.0.0.1", 00:19:08.880 "trsvcid": "40058", 00:19:08.880 "trtype": "TCP" 00:19:08.880 }, 00:19:08.880 "qid": 0, 00:19:08.880 "state": "enabled", 00:19:08.880 "thread": "nvmf_tgt_poll_group_000" 00:19:08.880 } 00:19:08.880 ]' 00:19:08.880 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.880 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.880 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.140 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:09.140 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.140 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.140 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.140 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.398 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:19:09.398 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:19:09.963 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.963 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:09.963 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.963 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.963 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.963 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.963 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.963 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:09.963 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:10.222 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:10.222 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.222 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:10.223 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:10.223 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:10.223 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.223 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.223 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.223 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.223 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.223 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.223 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.223 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.793 00:19:10.793 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.793 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.793 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.052 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.052 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.052 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.052 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.052 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.052 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.052 { 00:19:11.052 "auth": { 00:19:11.052 "dhgroup": "ffdhe6144", 00:19:11.052 "digest": "sha256", 00:19:11.052 "state": "completed" 00:19:11.052 }, 00:19:11.052 "cntlid": 33, 00:19:11.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:11.052 "listen_address": { 00:19:11.052 "adrfam": "IPv4", 00:19:11.052 "traddr": "10.0.0.3", 00:19:11.052 "trsvcid": "4420", 00:19:11.052 
"trtype": "TCP" 00:19:11.052 }, 00:19:11.052 "peer_address": { 00:19:11.052 "adrfam": "IPv4", 00:19:11.052 "traddr": "10.0.0.1", 00:19:11.052 "trsvcid": "40076", 00:19:11.052 "trtype": "TCP" 00:19:11.052 }, 00:19:11.052 "qid": 0, 00:19:11.052 "state": "enabled", 00:19:11.052 "thread": "nvmf_tgt_poll_group_000" 00:19:11.052 } 00:19:11.052 ]' 00:19:11.052 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.052 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.052 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.052 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:11.052 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.052 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.052 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.052 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.311 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:19:11.311 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:19:11.878 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.878 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:11.878 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.878 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.878 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.878 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.878 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:11.879 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:19:12.137 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:12.137 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.137 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:12.137 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:12.137 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:12.137 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.137 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.137 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.137 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.137 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.137 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.137 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.138 18:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.705 00:19:12.705 18:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.705 18:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.705 18:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.963 18:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.963 18:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.963 18:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.964 18:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.964 18:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.964 18:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.964 { 00:19:12.964 "auth": { 00:19:12.964 "dhgroup": "ffdhe6144", 00:19:12.964 "digest": "sha256", 00:19:12.964 "state": "completed" 00:19:12.964 }, 00:19:12.964 "cntlid": 35, 00:19:12.964 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:12.964 "listen_address": { 00:19:12.964 "adrfam": "IPv4", 00:19:12.964 "traddr": "10.0.0.3", 00:19:12.964 "trsvcid": "4420", 00:19:12.964 "trtype": "TCP" 00:19:12.964 }, 00:19:12.964 "peer_address": { 00:19:12.964 "adrfam": "IPv4", 00:19:12.964 "traddr": "10.0.0.1", 00:19:12.964 "trsvcid": "40102", 00:19:12.964 "trtype": "TCP" 00:19:12.964 }, 00:19:12.964 "qid": 0, 00:19:12.964 "state": "enabled", 00:19:12.964 "thread": "nvmf_tgt_poll_group_000" 00:19:12.964 } 00:19:12.964 ]' 00:19:12.964 18:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.964 18:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.964 18:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.964 18:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:12.964 18:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.222 18:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.222 18:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.222 18:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.481 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:19:13.481 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:19:14.048 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.048 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:14.048 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.048 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.048 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.048 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.048 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:14.048 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:14.307 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:14.307 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.307 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:14.307 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:14.307 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:14.307 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.307 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.307 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.307 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.307 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.307 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.307 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.307 18:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.873 00:19:14.873 18:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.873 18:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.873 18:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.132 18:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.132 18:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.132 18:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.132 18:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.132 18:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.132 18:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.132 { 00:19:15.132 "auth": { 00:19:15.132 "dhgroup": "ffdhe6144", 
00:19:15.132 "digest": "sha256", 00:19:15.132 "state": "completed" 00:19:15.132 }, 00:19:15.132 "cntlid": 37, 00:19:15.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:15.132 "listen_address": { 00:19:15.132 "adrfam": "IPv4", 00:19:15.132 "traddr": "10.0.0.3", 00:19:15.132 "trsvcid": "4420", 00:19:15.132 "trtype": "TCP" 00:19:15.132 }, 00:19:15.132 "peer_address": { 00:19:15.132 "adrfam": "IPv4", 00:19:15.132 "traddr": "10.0.0.1", 00:19:15.132 "trsvcid": "40122", 00:19:15.132 "trtype": "TCP" 00:19:15.132 }, 00:19:15.132 "qid": 0, 00:19:15.132 "state": "enabled", 00:19:15.132 "thread": "nvmf_tgt_poll_group_000" 00:19:15.132 } 00:19:15.132 ]' 00:19:15.132 18:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.132 18:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.132 18:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.132 18:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:15.132 18:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.132 18:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.132 18:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.132 18:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.699 18:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:19:15.699 18:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:19:16.267 18:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.267 18:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:16.267 18:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.267 18:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.267 18:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.267 18:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.267 18:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:19:16.267 18:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:16.525 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:16.525 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.525 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:16.525 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:16.525 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:16.525 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.525 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:19:16.525 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.525 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.525 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.525 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:16.525 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:16.525 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:17.092 00:19:17.092 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.092 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.092 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.351 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.351 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.351 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.351 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.351 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.351 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.351 { 00:19:17.351 "auth": { 00:19:17.351 "dhgroup": 
"ffdhe6144", 00:19:17.351 "digest": "sha256", 00:19:17.351 "state": "completed" 00:19:17.351 }, 00:19:17.351 "cntlid": 39, 00:19:17.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:17.351 "listen_address": { 00:19:17.351 "adrfam": "IPv4", 00:19:17.351 "traddr": "10.0.0.3", 00:19:17.351 "trsvcid": "4420", 00:19:17.351 "trtype": "TCP" 00:19:17.351 }, 00:19:17.351 "peer_address": { 00:19:17.351 "adrfam": "IPv4", 00:19:17.351 "traddr": "10.0.0.1", 00:19:17.351 "trsvcid": "49590", 00:19:17.351 "trtype": "TCP" 00:19:17.351 }, 00:19:17.351 "qid": 0, 00:19:17.351 "state": "enabled", 00:19:17.351 "thread": "nvmf_tgt_poll_group_000" 00:19:17.351 } 00:19:17.351 ]' 00:19:17.351 18:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.351 18:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.351 18:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.351 18:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:17.351 18:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.610 18:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.610 18:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.610 18:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.868 18:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:19:17.868 18:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:19:18.436 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.436 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:18.436 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.436 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.436 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.436 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:18.436 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.436 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:18.436 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:18.695 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:18.695 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.695 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:18.695 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:18.695 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:18.695 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.695 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.695 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.695 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.695 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.695 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.695 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.695 18:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.633 00:19:19.633 18:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.633 18:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.633 18:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.633 18:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.633 18:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.633 18:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.633 18:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.633 18:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.633 18:39:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.633 { 00:19:19.633 "auth": { 00:19:19.633 "dhgroup": "ffdhe8192", 00:19:19.633 "digest": "sha256", 00:19:19.633 "state": "completed" 00:19:19.633 }, 00:19:19.633 "cntlid": 41, 00:19:19.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:19.634 "listen_address": { 00:19:19.634 "adrfam": "IPv4", 00:19:19.634 "traddr": "10.0.0.3", 00:19:19.634 "trsvcid": "4420", 00:19:19.634 "trtype": "TCP" 00:19:19.634 }, 00:19:19.634 "peer_address": { 00:19:19.634 "adrfam": "IPv4", 00:19:19.634 "traddr": "10.0.0.1", 00:19:19.634 "trsvcid": "49630", 00:19:19.634 "trtype": "TCP" 00:19:19.634 }, 00:19:19.634 "qid": 0, 00:19:19.634 "state": "enabled", 00:19:19.634 "thread": "nvmf_tgt_poll_group_000" 00:19:19.634 } 00:19:19.634 ]' 00:19:19.634 18:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.892 18:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.892 18:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.892 18:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:19.892 18:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.892 18:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.892 18:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.892 18:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.151 18:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:19:20.151 18:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:19:20.718 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.718 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:20.718 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.718 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.718 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.718 18:39:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.718 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:20.718 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:21.285 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:21.285 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.285 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:21.285 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:21.285 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:21.285 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.285 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.285 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.285 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.285 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.285 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.285 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.285 18:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.852 00:19:21.852 18:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.852 18:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.852 18:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.852 18:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.852 18:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.852 18:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.852 18:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.852 18:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.852 18:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.852 { 00:19:21.852 "auth": { 00:19:21.852 "dhgroup": "ffdhe8192", 00:19:21.852 "digest": "sha256", 00:19:21.852 "state": "completed" 00:19:21.852 }, 00:19:21.852 "cntlid": 43, 00:19:21.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:21.852 "listen_address": { 00:19:21.852 "adrfam": "IPv4", 00:19:21.852 "traddr": "10.0.0.3", 00:19:21.852 "trsvcid": "4420", 00:19:21.852 "trtype": "TCP" 00:19:21.852 }, 00:19:21.852 "peer_address": { 00:19:21.852 "adrfam": "IPv4", 00:19:21.852 "traddr": "10.0.0.1", 00:19:21.852 "trsvcid": "49654", 00:19:21.852 "trtype": "TCP" 00:19:21.852 }, 00:19:21.852 "qid": 0, 00:19:21.852 "state": "enabled", 00:19:21.852 "thread": "nvmf_tgt_poll_group_000" 00:19:21.852 } 00:19:21.852 ]' 00:19:21.852 18:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.111 18:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.111 18:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.111 18:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:22.111 18:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.111 18:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.111 18:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.111 18:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.369 18:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:19:22.369 18:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:19:22.936 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.936 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:22.936 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.936 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
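Each round validates the negotiated session by dumping the subsystem's queue pairs and asserting on the auth block, exactly as the jq calls above do. A condensed sketch of that verification (same jq filters as the trace; feeding the captured JSON through a here-string is shorthand here, the script's plumbing differs):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]   # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]   # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]   # auth finished cleanly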
00:19:22.936 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.936 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.936 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:22.936 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:23.195 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:23.195 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.195 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:23.195 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:23.195 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:23.195 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.195 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.195 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.195 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.195 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.195 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.195 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.195 18:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.130 00:19:24.130 18:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.130 18:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.130 18:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.130 18:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.130 18:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.130 18:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.130 18:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.130 18:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.130 18:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.130 { 00:19:24.130 "auth": { 00:19:24.130 "dhgroup": "ffdhe8192", 00:19:24.130 "digest": "sha256", 00:19:24.130 "state": "completed" 00:19:24.130 }, 00:19:24.130 "cntlid": 45, 00:19:24.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:24.130 "listen_address": { 00:19:24.130 "adrfam": "IPv4", 00:19:24.130 "traddr": "10.0.0.3", 00:19:24.130 "trsvcid": "4420", 00:19:24.130 "trtype": "TCP" 00:19:24.130 }, 00:19:24.130 "peer_address": { 00:19:24.130 "adrfam": "IPv4", 00:19:24.130 "traddr": "10.0.0.1", 00:19:24.130 "trsvcid": "49674", 00:19:24.130 "trtype": "TCP" 00:19:24.130 }, 00:19:24.130 "qid": 0, 00:19:24.130 "state": "enabled", 00:19:24.130 "thread": "nvmf_tgt_poll_group_000" 00:19:24.130 } 00:19:24.130 ]' 00:19:24.130 18:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.387 18:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.388 18:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.388 18:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:24.388 18:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.388 18:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.388 18:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.388 18:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.646 18:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:19:24.646 18:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:19:25.582 18:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
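The reconnect just traced goes through nvme-cli on the host side rather than the SPDK initiator. The secrets follow the NVMe DH-HMAC-CHAP encoding DHHC-1:<hmac>:<base64 key>: where, to my understanding, the second field records the transformation applied to the key material (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). A hedged sketch of producing and using such a secret; the gen-dhchap-key invocation is an assumption about the installed nvme-cli, not something shown in this log:

    # generate a SHA-512-transformed host key (hypothetical helper invocation)
    key=$(nvme gen-dhchap-key --hmac=3 --nqn "$hostnqn")
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ctrl_key"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0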
00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:25.582 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:26.557 00:19:26.557 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.557 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.557 18:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.557 18:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.558 18:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.558 
18:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.558 18:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.558 18:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.558 18:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.558 { 00:19:26.558 "auth": { 00:19:26.558 "dhgroup": "ffdhe8192", 00:19:26.558 "digest": "sha256", 00:19:26.558 "state": "completed" 00:19:26.558 }, 00:19:26.558 "cntlid": 47, 00:19:26.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:26.558 "listen_address": { 00:19:26.558 "adrfam": "IPv4", 00:19:26.558 "traddr": "10.0.0.3", 00:19:26.558 "trsvcid": "4420", 00:19:26.558 "trtype": "TCP" 00:19:26.558 }, 00:19:26.558 "peer_address": { 00:19:26.558 "adrfam": "IPv4", 00:19:26.558 "traddr": "10.0.0.1", 00:19:26.558 "trsvcid": "56586", 00:19:26.558 "trtype": "TCP" 00:19:26.558 }, 00:19:26.558 "qid": 0, 00:19:26.558 "state": "enabled", 00:19:26.558 "thread": "nvmf_tgt_poll_group_000" 00:19:26.558 } 00:19:26.558 ]' 00:19:26.558 18:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.815 18:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.815 18:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.816 18:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:26.816 18:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.816 18:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.816 18:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.816 18:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.074 18:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:19:27.074 18:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:19:27.640 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.640 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:27.640 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.640 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
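With that, the sha256 digest pass is complete and the trace below moves on to sha384, starting from the null dhgroup. The target/auth.sh line references visible in the trace (@118 through @123) imply a nested sweep along these lines; the exact contents of the digests/dhgroups/keys arrays are not shown in this excerpt, so the comments below only reflect what appears here:

    for digest in "${digests[@]}"; do            # sha256 and sha384 are visible in this excerpt
        for dhgroup in "${dhgroups[@]}"; do      # includes null and the ffdhe4096/6144/8192 groups seen above
            for keyid in "${!keys[@]}"; do       # key0..key3 in this trace
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done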
00:19:27.640 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.640 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:27.640 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.640 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.640 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:27.640 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:27.898 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:27.898 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.898 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:27.898 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:27.898 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:27.898 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.899 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.899 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.899 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.899 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.899 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.899 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.899 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.157 00:19:28.414 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.414 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.414 18:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.672 18:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.672 18:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.672 18:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.672 18:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.672 18:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.672 18:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.672 { 00:19:28.672 "auth": { 00:19:28.672 "dhgroup": "null", 00:19:28.672 "digest": "sha384", 00:19:28.672 "state": "completed" 00:19:28.672 }, 00:19:28.672 "cntlid": 49, 00:19:28.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:28.672 "listen_address": { 00:19:28.672 "adrfam": "IPv4", 00:19:28.672 "traddr": "10.0.0.3", 00:19:28.672 "trsvcid": "4420", 00:19:28.672 "trtype": "TCP" 00:19:28.672 }, 00:19:28.672 "peer_address": { 00:19:28.672 "adrfam": "IPv4", 00:19:28.672 "traddr": "10.0.0.1", 00:19:28.672 "trsvcid": "56618", 00:19:28.672 "trtype": "TCP" 00:19:28.672 }, 00:19:28.672 "qid": 0, 00:19:28.672 "state": "enabled", 00:19:28.672 "thread": "nvmf_tgt_poll_group_000" 00:19:28.672 } 00:19:28.672 ]' 00:19:28.672 18:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.672 18:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:28.672 18:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.672 18:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:28.672 18:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.672 18:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.672 18:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.672 18:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.930 18:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:19:28.930 18:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:19:29.496 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.496 18:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:29.496 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.496 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.496 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.496 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.496 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:29.496 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:29.754 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:29.754 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.754 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:29.754 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:29.754 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:29.754 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.754 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.754 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.754 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.754 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.754 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.754 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.754 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.012 00:19:30.269 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.269 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.269 18:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.528 18:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.528 18:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.528 18:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.528 18:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.528 18:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.528 18:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.528 { 00:19:30.528 "auth": { 00:19:30.528 "dhgroup": "null", 00:19:30.528 "digest": "sha384", 00:19:30.528 "state": "completed" 00:19:30.528 }, 00:19:30.528 "cntlid": 51, 00:19:30.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:30.528 "listen_address": { 00:19:30.528 "adrfam": "IPv4", 00:19:30.528 "traddr": "10.0.0.3", 00:19:30.528 "trsvcid": "4420", 00:19:30.528 "trtype": "TCP" 00:19:30.528 }, 00:19:30.528 "peer_address": { 00:19:30.528 "adrfam": "IPv4", 00:19:30.528 "traddr": "10.0.0.1", 00:19:30.528 "trsvcid": "56642", 00:19:30.528 "trtype": "TCP" 00:19:30.528 }, 00:19:30.528 "qid": 0, 00:19:30.528 "state": "enabled", 00:19:30.528 "thread": "nvmf_tgt_poll_group_000" 00:19:30.528 } 00:19:30.528 ]' 00:19:30.528 18:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.528 18:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:30.528 18:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.528 18:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:30.528 18:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.528 18:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.528 18:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.528 18:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.786 18:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:19:30.786 18:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:19:31.721 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.721 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.721 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:31.721 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.721 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.722 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.722 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.722 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:31.722 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:31.722 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:31.722 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.722 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:31.722 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:31.722 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:31.722 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.722 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.722 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.722 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.722 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.722 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.722 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.722 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.287 00:19:32.287 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.287 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:32.287 18:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.544 18:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.545 18:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.545 18:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.545 18:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.545 18:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.545 18:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.545 { 00:19:32.545 "auth": { 00:19:32.545 "dhgroup": "null", 00:19:32.545 "digest": "sha384", 00:19:32.545 "state": "completed" 00:19:32.545 }, 00:19:32.545 "cntlid": 53, 00:19:32.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:32.545 "listen_address": { 00:19:32.545 "adrfam": "IPv4", 00:19:32.545 "traddr": "10.0.0.3", 00:19:32.545 "trsvcid": "4420", 00:19:32.545 "trtype": "TCP" 00:19:32.545 }, 00:19:32.545 "peer_address": { 00:19:32.545 "adrfam": "IPv4", 00:19:32.545 "traddr": "10.0.0.1", 00:19:32.545 "trsvcid": "56670", 00:19:32.545 "trtype": "TCP" 00:19:32.545 }, 00:19:32.545 "qid": 0, 00:19:32.545 "state": "enabled", 00:19:32.545 "thread": "nvmf_tgt_poll_group_000" 00:19:32.545 } 00:19:32.545 ]' 00:19:32.545 18:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.545 18:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:32.545 18:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.545 18:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:32.545 18:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.545 18:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.545 18:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.545 18:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.803 18:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:19:32.803 18:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:19:33.736 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.736 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:33.736 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.736 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.736 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.736 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.736 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:33.736 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:33.994 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:19:33.994 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.994 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:33.994 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:33.994 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:33.994 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.994 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:19:33.994 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.994 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.994 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.994 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:33.994 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:33.994 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:34.253 00:19:34.253 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.253 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:19:34.253 18:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.511 18:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.511 18:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.511 18:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.511 18:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.511 18:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.511 18:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.511 { 00:19:34.511 "auth": { 00:19:34.511 "dhgroup": "null", 00:19:34.511 "digest": "sha384", 00:19:34.511 "state": "completed" 00:19:34.511 }, 00:19:34.511 "cntlid": 55, 00:19:34.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:34.511 "listen_address": { 00:19:34.511 "adrfam": "IPv4", 00:19:34.511 "traddr": "10.0.0.3", 00:19:34.511 "trsvcid": "4420", 00:19:34.511 "trtype": "TCP" 00:19:34.511 }, 00:19:34.511 "peer_address": { 00:19:34.511 "adrfam": "IPv4", 00:19:34.511 "traddr": "10.0.0.1", 00:19:34.511 "trsvcid": "56704", 00:19:34.511 "trtype": "TCP" 00:19:34.511 }, 00:19:34.511 "qid": 0, 00:19:34.511 "state": "enabled", 00:19:34.511 "thread": "nvmf_tgt_poll_group_000" 00:19:34.511 } 00:19:34.511 ]' 00:19:34.511 18:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.511 18:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.511 18:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.769 18:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:34.769 18:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.769 18:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.769 18:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.769 18:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.027 18:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:19:35.027 18:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:19:35.593 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:35.593 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:35.593 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.593 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.593 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.593 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.593 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.593 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:35.593 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:36.159 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:19:36.159 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.159 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:36.159 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:36.159 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:36.159 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.159 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.159 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.159 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.159 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.159 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.159 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.159 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.416 00:19:36.416 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.416 18:39:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.416 18:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.674 18:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.674 18:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.674 18:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.674 18:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.674 18:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.674 18:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.674 { 00:19:36.674 "auth": { 00:19:36.674 "dhgroup": "ffdhe2048", 00:19:36.674 "digest": "sha384", 00:19:36.674 "state": "completed" 00:19:36.674 }, 00:19:36.674 "cntlid": 57, 00:19:36.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:36.674 "listen_address": { 00:19:36.674 "adrfam": "IPv4", 00:19:36.674 "traddr": "10.0.0.3", 00:19:36.675 "trsvcid": "4420", 00:19:36.675 "trtype": "TCP" 00:19:36.675 }, 00:19:36.675 "peer_address": { 00:19:36.675 "adrfam": "IPv4", 00:19:36.675 "traddr": "10.0.0.1", 00:19:36.675 "trsvcid": "50078", 00:19:36.675 "trtype": "TCP" 00:19:36.675 }, 00:19:36.675 "qid": 0, 00:19:36.675 "state": "enabled", 00:19:36.675 "thread": "nvmf_tgt_poll_group_000" 00:19:36.675 } 00:19:36.675 ]' 00:19:36.675 18:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.675 18:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.675 18:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.675 18:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:36.675 18:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.675 18:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.675 18:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.675 18:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.241 18:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:19:37.241 18:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: 
--dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:19:37.806 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.806 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:37.806 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.806 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.806 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.806 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.806 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:37.806 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:38.064 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:19:38.064 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.064 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:38.064 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:38.064 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:38.064 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.064 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.064 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.064 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.064 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.064 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.064 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.064 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.321 00:19:38.321 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.322 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.322 18:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.579 18:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.579 18:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.579 18:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.579 18:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.579 18:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.579 18:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.579 { 00:19:38.579 "auth": { 00:19:38.579 "dhgroup": "ffdhe2048", 00:19:38.579 "digest": "sha384", 00:19:38.579 "state": "completed" 00:19:38.579 }, 00:19:38.579 "cntlid": 59, 00:19:38.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:38.579 "listen_address": { 00:19:38.579 "adrfam": "IPv4", 00:19:38.579 "traddr": "10.0.0.3", 00:19:38.579 "trsvcid": "4420", 00:19:38.579 "trtype": "TCP" 00:19:38.579 }, 00:19:38.579 "peer_address": { 00:19:38.579 "adrfam": "IPv4", 00:19:38.579 "traddr": "10.0.0.1", 00:19:38.579 "trsvcid": "50110", 00:19:38.579 "trtype": "TCP" 00:19:38.579 }, 00:19:38.579 "qid": 0, 00:19:38.579 "state": "enabled", 00:19:38.579 "thread": "nvmf_tgt_poll_group_000" 00:19:38.579 } 00:19:38.579 ]' 00:19:38.579 18:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.837 18:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.837 18:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.837 18:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:38.837 18:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.837 18:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.837 18:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.837 18:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.095 18:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:19:39.095 18:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:19:39.660 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.660 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:39.660 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.660 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.918 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.918 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.918 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:39.918 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:40.176 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:19:40.176 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.176 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:40.176 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:40.176 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:40.176 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.176 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.176 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.176 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.176 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.176 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.176 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.176 18:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.434 00:19:40.691 18:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.691 18:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.691 18:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.949 18:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.949 18:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.949 18:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.949 18:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.949 18:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.949 18:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.949 { 00:19:40.949 "auth": { 00:19:40.949 "dhgroup": "ffdhe2048", 00:19:40.949 "digest": "sha384", 00:19:40.949 "state": "completed" 00:19:40.949 }, 00:19:40.949 "cntlid": 61, 00:19:40.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:40.949 "listen_address": { 00:19:40.949 "adrfam": "IPv4", 00:19:40.949 "traddr": "10.0.0.3", 00:19:40.949 "trsvcid": "4420", 00:19:40.949 "trtype": "TCP" 00:19:40.949 }, 00:19:40.949 "peer_address": { 00:19:40.949 "adrfam": "IPv4", 00:19:40.949 "traddr": "10.0.0.1", 00:19:40.949 "trsvcid": "50148", 00:19:40.949 "trtype": "TCP" 00:19:40.949 }, 00:19:40.949 "qid": 0, 00:19:40.949 "state": "enabled", 00:19:40.949 "thread": "nvmf_tgt_poll_group_000" 00:19:40.949 } 00:19:40.949 ]' 00:19:40.949 18:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.949 18:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.949 18:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.949 18:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:40.949 18:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.949 18:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.949 18:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.949 18:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.207 18:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:19:41.207 18:39:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:19:41.843 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.843 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:41.843 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.843 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.843 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.843 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.843 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:41.843 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:42.101 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:19:42.101 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.101 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:42.101 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:42.101 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:42.101 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.101 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:19:42.101 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.101 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.101 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.101 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:42.101 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:42.101 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:42.667 00:19:42.667 18:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.667 18:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.667 18:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.925 18:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.925 18:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.925 18:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.925 18:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.925 18:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.925 18:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.925 { 00:19:42.925 "auth": { 00:19:42.925 "dhgroup": "ffdhe2048", 00:19:42.925 "digest": "sha384", 00:19:42.925 "state": "completed" 00:19:42.925 }, 00:19:42.925 "cntlid": 63, 00:19:42.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:42.925 "listen_address": { 00:19:42.925 "adrfam": "IPv4", 00:19:42.925 "traddr": "10.0.0.3", 00:19:42.925 "trsvcid": "4420", 00:19:42.925 "trtype": "TCP" 00:19:42.925 }, 00:19:42.925 "peer_address": { 00:19:42.925 "adrfam": "IPv4", 00:19:42.925 "traddr": "10.0.0.1", 00:19:42.925 "trsvcid": "50170", 00:19:42.925 "trtype": "TCP" 00:19:42.925 }, 00:19:42.925 "qid": 0, 00:19:42.925 "state": "enabled", 00:19:42.925 "thread": "nvmf_tgt_poll_group_000" 00:19:42.925 } 00:19:42.925 ]' 00:19:42.925 18:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.925 18:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:42.925 18:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.925 18:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:42.925 18:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.925 18:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.925 18:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.925 18:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.491 18:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:19:43.491 18:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:19:44.057 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.057 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:44.057 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.057 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.057 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.057 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.057 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.057 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:44.057 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:44.317 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:19:44.317 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.317 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:44.317 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:44.317 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:44.317 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.317 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.317 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.317 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.317 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.317 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.317 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:44.317 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.575 00:19:44.575 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.575 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.575 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.833 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.833 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.833 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.833 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.091 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.091 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.091 { 00:19:45.091 "auth": { 00:19:45.091 "dhgroup": "ffdhe3072", 00:19:45.091 "digest": "sha384", 00:19:45.091 "state": "completed" 00:19:45.091 }, 00:19:45.091 "cntlid": 65, 00:19:45.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:45.091 "listen_address": { 00:19:45.091 "adrfam": "IPv4", 00:19:45.091 "traddr": "10.0.0.3", 00:19:45.091 "trsvcid": "4420", 00:19:45.091 "trtype": "TCP" 00:19:45.091 }, 00:19:45.091 "peer_address": { 00:19:45.091 "adrfam": "IPv4", 00:19:45.091 "traddr": "10.0.0.1", 00:19:45.091 "trsvcid": "50206", 00:19:45.091 "trtype": "TCP" 00:19:45.091 }, 00:19:45.091 "qid": 0, 00:19:45.091 "state": "enabled", 00:19:45.091 "thread": "nvmf_tgt_poll_group_000" 00:19:45.091 } 00:19:45.091 ]' 00:19:45.091 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.091 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.091 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.091 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:45.091 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.091 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.091 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.091 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.348 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:19:45.348 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:19:45.914 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.914 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:45.914 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.914 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.914 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.914 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.914 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:45.914 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:46.171 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:19:46.171 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.171 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:46.171 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:46.171 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:46.171 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.171 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.171 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.171 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.171 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.171 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.171 18:40:01 
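
The records above complete one full pass for key0 and immediately begin the next one for key1; the same fixed cycle repeats for every key index under the current digest/dhgroup pair. A condensed sketch of that cycle, using the socket path and flags exactly as they appear in this trace (hostrpc/rpc_cmd are the suite's wrappers for the host-side and target-side rpc.py sockets; the body below approximates target/auth.sh's connect_authenticate, it is not its verbatim code, and the default target socket is assumed for the target-side calls):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6

    for keyid in 0 1 2 3; do
        # host side: restrict the initiator to one digest/dhgroup combination
        "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
        # target side: allow this host NQN, bound to the key pair under test
        "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        # host side: attach a controller, which forces DH-HMAC-CHAP to run
        "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
            -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        # ...verify the qpair (see the jq checks further down), then tear down
        "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
        "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
    done

The key0..key3/ckey0..ckey3 names are keyring entries created earlier in target/auth.sh, outside this excerpt; ckey3 is apparently unset, since the key3 iterations drop --dhchap-ctrlr-key entirely (more on that below).
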
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.171 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.429 00:19:46.429 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.429 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.429 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.994 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.994 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.994 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.994 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.994 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.994 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.994 { 00:19:46.994 "auth": { 00:19:46.994 "dhgroup": "ffdhe3072", 00:19:46.994 "digest": "sha384", 00:19:46.994 "state": "completed" 00:19:46.994 }, 00:19:46.994 "cntlid": 67, 00:19:46.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:46.994 "listen_address": { 00:19:46.994 "adrfam": "IPv4", 00:19:46.994 "traddr": "10.0.0.3", 00:19:46.994 "trsvcid": "4420", 00:19:46.994 "trtype": "TCP" 00:19:46.994 }, 00:19:46.994 "peer_address": { 00:19:46.994 "adrfam": "IPv4", 00:19:46.994 "traddr": "10.0.0.1", 00:19:46.994 "trsvcid": "39988", 00:19:46.994 "trtype": "TCP" 00:19:46.994 }, 00:19:46.994 "qid": 0, 00:19:46.994 "state": "enabled", 00:19:46.994 "thread": "nvmf_tgt_poll_group_000" 00:19:46.994 } 00:19:46.994 ]' 00:19:46.994 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.994 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.994 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.994 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:46.994 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.994 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.994 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.994 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.251 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:19:47.251 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:19:47.816 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.816 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:47.816 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.816 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.816 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.816 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.816 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:47.816 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:48.381 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:19:48.381 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.381 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:48.381 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:48.381 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:48.381 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.381 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.381 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.381 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.381 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.381 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.381 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.381 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.638 00:19:48.638 18:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.638 18:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.638 18:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.896 18:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.896 18:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.896 18:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.896 18:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.896 18:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.896 18:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.896 { 00:19:48.896 "auth": { 00:19:48.896 "dhgroup": "ffdhe3072", 00:19:48.896 "digest": "sha384", 00:19:48.896 "state": "completed" 00:19:48.896 }, 00:19:48.896 "cntlid": 69, 00:19:48.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:48.896 "listen_address": { 00:19:48.896 "adrfam": "IPv4", 00:19:48.896 "traddr": "10.0.0.3", 00:19:48.896 "trsvcid": "4420", 00:19:48.896 "trtype": "TCP" 00:19:48.896 }, 00:19:48.896 "peer_address": { 00:19:48.896 "adrfam": "IPv4", 00:19:48.896 "traddr": "10.0.0.1", 00:19:48.896 "trsvcid": "40014", 00:19:48.896 "trtype": "TCP" 00:19:48.896 }, 00:19:48.896 "qid": 0, 00:19:48.896 "state": "enabled", 00:19:48.896 "thread": "nvmf_tgt_poll_group_000" 00:19:48.896 } 00:19:48.896 ]' 00:19:48.896 18:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.896 18:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.896 18:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.154 18:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:49.154 18:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.154 18:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.154 18:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:49.154 18:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.412 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:19:49.412 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:19:49.977 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.977 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:49.977 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.977 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.977 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.977 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.977 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:49.977 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:50.236 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:19:50.236 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.236 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:50.236 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:50.236 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:50.236 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.236 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:19:50.236 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.236 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.236 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.236 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:50.236 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.236 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.802 00:19:50.802 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.802 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.802 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.060 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.060 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.060 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.060 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.060 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.060 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.060 { 00:19:51.060 "auth": { 00:19:51.060 "dhgroup": "ffdhe3072", 00:19:51.060 "digest": "sha384", 00:19:51.060 "state": "completed" 00:19:51.060 }, 00:19:51.060 "cntlid": 71, 00:19:51.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:51.060 "listen_address": { 00:19:51.060 "adrfam": "IPv4", 00:19:51.060 "traddr": "10.0.0.3", 00:19:51.060 "trsvcid": "4420", 00:19:51.060 "trtype": "TCP" 00:19:51.060 }, 00:19:51.060 "peer_address": { 00:19:51.060 "adrfam": "IPv4", 00:19:51.060 "traddr": "10.0.0.1", 00:19:51.060 "trsvcid": "40034", 00:19:51.060 "trtype": "TCP" 00:19:51.060 }, 00:19:51.060 "qid": 0, 00:19:51.060 "state": "enabled", 00:19:51.060 "thread": "nvmf_tgt_poll_group_000" 00:19:51.060 } 00:19:51.060 ]' 00:19:51.060 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.060 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.060 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.060 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:51.060 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.060 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.060 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.060 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.625 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:19:51.625 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:19:52.190 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.190 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:52.190 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.190 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.190 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.190 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.190 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.190 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:52.190 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:52.448 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:19:52.448 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.448 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:52.448 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:52.448 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:52.448 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.448 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.448 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.448 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.448 18:40:08 
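
The for-loop marker at auth.sh@119 above is the outer loop advancing from ffdhe3072 to ffdhe4096; the one at auth.sh@120 is the inner loop over key indexes. The sha384 slice of this log therefore reduces to the following shape (the arrays are populated earlier in target/auth.sh, outside this excerpt; only the three DH groups visible in this slice are listed):

    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
        for keyid in "${!keys[@]}"; do
            # auth.sh@121: pin the host to one digest/dhgroup combination
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
                                          --dhchap-dhgroups "$dhgroup"
            # auth.sh@123: add host, attach, verify, detach, remove
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done
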
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.448 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.448 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.448 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.013 00:19:53.013 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.013 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.013 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.271 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.271 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.271 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.271 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.271 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.271 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.271 { 00:19:53.271 "auth": { 00:19:53.271 "dhgroup": "ffdhe4096", 00:19:53.271 "digest": "sha384", 00:19:53.271 "state": "completed" 00:19:53.271 }, 00:19:53.271 "cntlid": 73, 00:19:53.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:53.271 "listen_address": { 00:19:53.271 "adrfam": "IPv4", 00:19:53.271 "traddr": "10.0.0.3", 00:19:53.271 "trsvcid": "4420", 00:19:53.271 "trtype": "TCP" 00:19:53.271 }, 00:19:53.271 "peer_address": { 00:19:53.271 "adrfam": "IPv4", 00:19:53.271 "traddr": "10.0.0.1", 00:19:53.271 "trsvcid": "40056", 00:19:53.271 "trtype": "TCP" 00:19:53.271 }, 00:19:53.271 "qid": 0, 00:19:53.271 "state": "enabled", 00:19:53.271 "thread": "nvmf_tgt_poll_group_000" 00:19:53.271 } 00:19:53.271 ]' 00:19:53.271 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.271 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.271 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.271 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:53.271 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.271 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.271 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.271 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.836 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:19:53.836 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:19:54.403 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.403 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:54.403 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.403 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.403 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.403 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.403 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:54.403 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:54.660 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:19:54.660 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.660 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:54.660 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:54.660 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:54.660 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.661 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.661 18:40:10 
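
Each attach in this log is validated the same way before teardown: confirm the controller name on the host, then pull the subsystem's queue pairs from the target and compare the negotiated auth fields against what was just configured. Roughly, matching the @73-@77 records above (rpc_cmd/hostrpc as in the trace; the dhgroup literal changes with each outer-loop iteration):

    name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

A mismatch in any field fails the [[ ]] comparison and, with it, the test.
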
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.661 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.661 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.661 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.661 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.661 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.226 00:19:55.226 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.226 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.226 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.483 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.483 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.483 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.483 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.483 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.483 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.483 { 00:19:55.483 "auth": { 00:19:55.483 "dhgroup": "ffdhe4096", 00:19:55.483 "digest": "sha384", 00:19:55.483 "state": "completed" 00:19:55.483 }, 00:19:55.483 "cntlid": 75, 00:19:55.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:55.483 "listen_address": { 00:19:55.483 "adrfam": "IPv4", 00:19:55.483 "traddr": "10.0.0.3", 00:19:55.483 "trsvcid": "4420", 00:19:55.483 "trtype": "TCP" 00:19:55.483 }, 00:19:55.483 "peer_address": { 00:19:55.483 "adrfam": "IPv4", 00:19:55.483 "traddr": "10.0.0.1", 00:19:55.483 "trsvcid": "40082", 00:19:55.483 "trtype": "TCP" 00:19:55.483 }, 00:19:55.483 "qid": 0, 00:19:55.483 "state": "enabled", 00:19:55.483 "thread": "nvmf_tgt_poll_group_000" 00:19:55.483 } 00:19:55.483 ]' 00:19:55.483 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.483 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.483 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.483 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:19:55.483 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.741 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.741 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.741 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.998 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:19:55.998 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:19:56.565 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.565 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:56.565 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.565 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.565 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.565 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.565 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:56.565 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:56.825 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:19:56.825 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.825 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:56.825 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:56.825 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:56.825 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.825 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.825 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.825 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.825 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.825 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.825 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.825 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.392 00:19:57.392 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.392 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.392 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.651 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.651 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.651 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.651 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.651 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.651 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.651 { 00:19:57.651 "auth": { 00:19:57.651 "dhgroup": "ffdhe4096", 00:19:57.651 "digest": "sha384", 00:19:57.651 "state": "completed" 00:19:57.651 }, 00:19:57.651 "cntlid": 77, 00:19:57.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:57.651 "listen_address": { 00:19:57.651 "adrfam": "IPv4", 00:19:57.651 "traddr": "10.0.0.3", 00:19:57.651 "trsvcid": "4420", 00:19:57.651 "trtype": "TCP" 00:19:57.651 }, 00:19:57.651 "peer_address": { 00:19:57.651 "adrfam": "IPv4", 00:19:57.651 "traddr": "10.0.0.1", 00:19:57.651 "trsvcid": "54102", 00:19:57.651 "trtype": "TCP" 00:19:57.651 }, 00:19:57.651 "qid": 0, 00:19:57.651 "state": "enabled", 00:19:57.651 "thread": "nvmf_tgt_poll_group_000" 00:19:57.651 } 00:19:57.651 ]' 00:19:57.651 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.651 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.651 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:19:57.651 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:57.651 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.651 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.651 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.651 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.910 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:19:57.910 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:19:58.477 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.736 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:19:58.736 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.736 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.736 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.736 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.736 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:58.736 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:58.994 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:19:58.994 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.994 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:58.994 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:58.994 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:58.994 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.994 18:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:19:58.994 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.994 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.994 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.995 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:58.995 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.995 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.254 00:19:59.254 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.254 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.254 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.822 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.822 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.822 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.822 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.822 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.822 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.822 { 00:19:59.822 "auth": { 00:19:59.822 "dhgroup": "ffdhe4096", 00:19:59.822 "digest": "sha384", 00:19:59.822 "state": "completed" 00:19:59.823 }, 00:19:59.823 "cntlid": 79, 00:19:59.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:19:59.823 "listen_address": { 00:19:59.823 "adrfam": "IPv4", 00:19:59.823 "traddr": "10.0.0.3", 00:19:59.823 "trsvcid": "4420", 00:19:59.823 "trtype": "TCP" 00:19:59.823 }, 00:19:59.823 "peer_address": { 00:19:59.823 "adrfam": "IPv4", 00:19:59.823 "traddr": "10.0.0.1", 00:19:59.823 "trsvcid": "54134", 00:19:59.823 "trtype": "TCP" 00:19:59.823 }, 00:19:59.823 "qid": 0, 00:19:59.823 "state": "enabled", 00:19:59.823 "thread": "nvmf_tgt_poll_group_000" 00:19:59.823 } 00:19:59.823 ]' 00:19:59.823 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.823 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:59.823 18:40:15 
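
Note the asymmetry in the key3 records above: ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expands to nothing when ckeys[3] is unset, so the host is registered with --dhchap-key key3 alone and the attach likewise carries no controller key. That makes key3 the unidirectional case: the target authenticates the host, but the host does not require the controller to prove itself in return. The kernel-initiator connects in this log show the same split; in the sketch below, HOST_KEY/CTRL_KEY stand in for the DHHC-1 strings printed in the trace:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6

    # key0..key2: bidirectional, host secret plus expected controller secret
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 \
        --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"

    # key3: unidirectional, host secret only
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 \
        --dhchap-secret "$HOST_KEY"
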
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.823 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:59.823 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.823 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.823 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.823 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.081 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:20:00.081 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:20:00.648 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.648 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:00.648 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.648 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.648 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.648 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.648 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.648 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:00.648 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:00.907 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:00.907 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.907 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:00.907 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:00.907 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:00.907 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.907 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.907 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.907 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.907 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.907 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.907 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.907 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.473 00:20:01.473 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.473 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.473 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.732 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.732 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.732 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.732 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.732 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.732 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.732 { 00:20:01.732 "auth": { 00:20:01.732 "dhgroup": "ffdhe6144", 00:20:01.732 "digest": "sha384", 00:20:01.732 "state": "completed" 00:20:01.732 }, 00:20:01.732 "cntlid": 81, 00:20:01.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:01.732 "listen_address": { 00:20:01.732 "adrfam": "IPv4", 00:20:01.732 "traddr": "10.0.0.3", 00:20:01.732 "trsvcid": "4420", 00:20:01.732 "trtype": "TCP" 00:20:01.732 }, 00:20:01.732 "peer_address": { 00:20:01.732 "adrfam": "IPv4", 00:20:01.732 "traddr": "10.0.0.1", 00:20:01.732 "trsvcid": "54148", 00:20:01.732 "trtype": "TCP" 00:20:01.732 }, 00:20:01.732 "qid": 0, 00:20:01.732 "state": "enabled", 00:20:01.732 "thread": "nvmf_tgt_poll_group_000" 00:20:01.732 } 00:20:01.732 ]' 00:20:01.732 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
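
On the secret strings themselves: every key in this trace uses the DHHC-1 transport format, DHHC-1:<t>:<base64>:. By the NVMe spec / nvme-cli convention, <t> records how the secret was transformed before encoding (00 = cleartext, 01/02/03 = hashed with SHA-256/384/512), and the base64 payload is the secret followed by a 4-byte CRC-32. Inspecting one is a one-liner; this step is illustrative only and is not part of the test:

    # one of the host secrets from this trace, verbatim
    secret='DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==:'
    # field 3 is the base64 payload; the last 4 decoded bytes are the CRC-32
    cut -d: -f3 <<< "$secret" | base64 -d | xxd
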
00:20:01.732 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.990 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.990 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:01.990 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.990 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.990 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.990 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.249 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:20:02.249 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:20:02.815 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.815 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:02.815 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.815 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.074 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.074 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.074 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:03.074 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:03.333 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:03.333 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.333 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:03.333 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:20:03.333 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:03.333 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.333 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.333 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.333 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.333 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.333 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.333 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.333 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.591 00:20:03.850 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.850 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.850 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.109 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.109 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.109 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.109 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.109 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.109 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.109 { 00:20:04.109 "auth": { 00:20:04.109 "dhgroup": "ffdhe6144", 00:20:04.109 "digest": "sha384", 00:20:04.109 "state": "completed" 00:20:04.109 }, 00:20:04.109 "cntlid": 83, 00:20:04.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:04.109 "listen_address": { 00:20:04.109 "adrfam": "IPv4", 00:20:04.109 "traddr": "10.0.0.3", 00:20:04.109 "trsvcid": "4420", 00:20:04.109 "trtype": "TCP" 00:20:04.109 }, 00:20:04.109 "peer_address": { 00:20:04.109 "adrfam": "IPv4", 00:20:04.109 "traddr": "10.0.0.1", 00:20:04.109 "trsvcid": "54180", 00:20:04.109 "trtype": "TCP" 00:20:04.109 }, 00:20:04.109 "qid": 0, 00:20:04.109 "state": 
"enabled", 00:20:04.109 "thread": "nvmf_tgt_poll_group_000" 00:20:04.109 } 00:20:04.109 ]' 00:20:04.109 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.109 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.109 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.109 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:04.109 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.109 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.109 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.109 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.367 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:20:04.367 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:20:04.933 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.934 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:04.934 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.934 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.934 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.934 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.934 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:04.934 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:05.191 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:05.191 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.191 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha384 00:20:05.191 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:05.191 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:05.191 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.191 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.191 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.191 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.191 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.191 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.191 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.191 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.758 00:20:05.758 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.758 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.758 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.016 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.016 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.016 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.016 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.016 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.016 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.016 { 00:20:06.016 "auth": { 00:20:06.016 "dhgroup": "ffdhe6144", 00:20:06.016 "digest": "sha384", 00:20:06.016 "state": "completed" 00:20:06.016 }, 00:20:06.016 "cntlid": 85, 00:20:06.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:06.016 "listen_address": { 00:20:06.016 "adrfam": "IPv4", 00:20:06.016 "traddr": "10.0.0.3", 00:20:06.016 "trsvcid": "4420", 00:20:06.016 "trtype": "TCP" 00:20:06.016 }, 00:20:06.016 "peer_address": { 00:20:06.016 "adrfam": "IPv4", 00:20:06.016 "traddr": "10.0.0.1", 00:20:06.016 
"trsvcid": "54198", 00:20:06.016 "trtype": "TCP" 00:20:06.016 }, 00:20:06.016 "qid": 0, 00:20:06.016 "state": "enabled", 00:20:06.016 "thread": "nvmf_tgt_poll_group_000" 00:20:06.016 } 00:20:06.016 ]' 00:20:06.016 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.016 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.017 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.017 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:06.017 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.276 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.276 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.276 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.534 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:20:06.534 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:20:07.102 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.102 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:07.102 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.102 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.102 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.102 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.102 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:07.102 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:07.360 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:07.360 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:20:07.360 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:07.360 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:07.360 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:07.360 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.360 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:20:07.360 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.360 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.360 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.360 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:07.360 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.360 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.619 00:20:07.877 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.877 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.877 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.877 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.878 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.878 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.878 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.136 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.136 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.136 { 00:20:08.136 "auth": { 00:20:08.136 "dhgroup": "ffdhe6144", 00:20:08.136 "digest": "sha384", 00:20:08.136 "state": "completed" 00:20:08.136 }, 00:20:08.136 "cntlid": 87, 00:20:08.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:08.136 "listen_address": { 00:20:08.136 "adrfam": "IPv4", 00:20:08.136 "traddr": "10.0.0.3", 00:20:08.136 "trsvcid": "4420", 00:20:08.136 "trtype": "TCP" 00:20:08.136 }, 00:20:08.136 "peer_address": { 00:20:08.137 "adrfam": "IPv4", 00:20:08.137 "traddr": "10.0.0.1", 
00:20:08.137 "trsvcid": "54000", 00:20:08.137 "trtype": "TCP" 00:20:08.137 }, 00:20:08.137 "qid": 0, 00:20:08.137 "state": "enabled", 00:20:08.137 "thread": "nvmf_tgt_poll_group_000" 00:20:08.137 } 00:20:08.137 ]' 00:20:08.137 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.137 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.137 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.137 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:08.137 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.137 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.137 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.137 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.416 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:20:08.416 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:20:09.002 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.002 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:09.002 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.002 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.002 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.002 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.002 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.002 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:09.002 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:09.261 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:09.261 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 00:20:09.261 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:09.261 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:09.261 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:09.261 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.261 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.261 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.261 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.261 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.261 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.261 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.261 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.833 00:20:09.833 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.833 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.833 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.091 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.091 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.091 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.091 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.091 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.091 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.091 { 00:20:10.091 "auth": { 00:20:10.091 "dhgroup": "ffdhe8192", 00:20:10.091 "digest": "sha384", 00:20:10.091 "state": "completed" 00:20:10.091 }, 00:20:10.091 "cntlid": 89, 00:20:10.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:10.091 "listen_address": { 00:20:10.091 "adrfam": "IPv4", 00:20:10.091 "traddr": "10.0.0.3", 00:20:10.091 "trsvcid": "4420", 00:20:10.091 "trtype": "TCP" 
00:20:10.091 }, 00:20:10.091 "peer_address": { 00:20:10.091 "adrfam": "IPv4", 00:20:10.091 "traddr": "10.0.0.1", 00:20:10.091 "trsvcid": "54024", 00:20:10.091 "trtype": "TCP" 00:20:10.091 }, 00:20:10.091 "qid": 0, 00:20:10.091 "state": "enabled", 00:20:10.091 "thread": "nvmf_tgt_poll_group_000" 00:20:10.091 } 00:20:10.091 ]' 00:20:10.091 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.091 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.091 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.350 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:10.350 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.350 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.350 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.350 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.609 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:20:10.609 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:20:11.176 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.176 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:11.176 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.176 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.176 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.176 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.176 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:11.176 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:11.435 18:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:11.435 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.435 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:11.435 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:11.435 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:11.435 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.435 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.435 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.435 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.435 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.435 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.435 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.435 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.002 00:20:12.002 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.002 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.002 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.261 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.261 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.261 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.261 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.261 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.261 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.261 { 00:20:12.261 "auth": { 00:20:12.261 "dhgroup": "ffdhe8192", 00:20:12.261 "digest": "sha384", 00:20:12.261 "state": "completed" 00:20:12.261 }, 00:20:12.261 "cntlid": 91, 00:20:12.261 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:12.261 "listen_address": { 00:20:12.261 "adrfam": "IPv4", 00:20:12.261 "traddr": "10.0.0.3", 00:20:12.261 "trsvcid": "4420", 00:20:12.261 "trtype": "TCP" 00:20:12.261 }, 00:20:12.261 "peer_address": { 00:20:12.261 "adrfam": "IPv4", 00:20:12.261 "traddr": "10.0.0.1", 00:20:12.261 "trsvcid": "54056", 00:20:12.261 "trtype": "TCP" 00:20:12.261 }, 00:20:12.261 "qid": 0, 00:20:12.261 "state": "enabled", 00:20:12.261 "thread": "nvmf_tgt_poll_group_000" 00:20:12.261 } 00:20:12.261 ]' 00:20:12.261 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.519 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.519 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.519 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:12.519 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.519 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.519 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.519 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.778 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:20:12.778 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:20:13.346 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.346 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:13.346 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.346 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.346 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.346 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.346 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:13.346 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:13.605 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:13.605 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.605 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:13.605 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:13.605 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:13.605 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.605 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.605 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.605 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.605 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.605 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.605 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.605 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.172 00:20:14.172 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.172 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.172 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.738 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.738 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.738 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.738 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.738 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.738 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.738 { 00:20:14.738 "auth": { 00:20:14.738 "dhgroup": "ffdhe8192", 
00:20:14.738 "digest": "sha384", 00:20:14.738 "state": "completed" 00:20:14.738 }, 00:20:14.738 "cntlid": 93, 00:20:14.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:14.739 "listen_address": { 00:20:14.739 "adrfam": "IPv4", 00:20:14.739 "traddr": "10.0.0.3", 00:20:14.739 "trsvcid": "4420", 00:20:14.739 "trtype": "TCP" 00:20:14.739 }, 00:20:14.739 "peer_address": { 00:20:14.739 "adrfam": "IPv4", 00:20:14.739 "traddr": "10.0.0.1", 00:20:14.739 "trsvcid": "54086", 00:20:14.739 "trtype": "TCP" 00:20:14.739 }, 00:20:14.739 "qid": 0, 00:20:14.739 "state": "enabled", 00:20:14.739 "thread": "nvmf_tgt_poll_group_000" 00:20:14.739 } 00:20:14.739 ]' 00:20:14.739 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.739 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.739 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.739 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:14.739 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.739 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.739 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.739 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.997 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:20:14.997 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:20:15.564 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.564 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:15.564 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.564 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.564 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.564 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.564 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:20:15.564 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:15.823 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:15.823 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.823 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:15.823 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:15.823 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:15.823 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.823 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:20:15.823 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.823 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.823 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.823 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:15.823 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:15.823 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.758 00:20:16.758 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.758 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.758 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.017 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.017 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.017 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.017 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.017 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.017 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.017 { 00:20:17.017 "auth": { 00:20:17.017 "dhgroup": 
"ffdhe8192", 00:20:17.017 "digest": "sha384", 00:20:17.017 "state": "completed" 00:20:17.017 }, 00:20:17.017 "cntlid": 95, 00:20:17.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:17.017 "listen_address": { 00:20:17.017 "adrfam": "IPv4", 00:20:17.017 "traddr": "10.0.0.3", 00:20:17.017 "trsvcid": "4420", 00:20:17.017 "trtype": "TCP" 00:20:17.017 }, 00:20:17.017 "peer_address": { 00:20:17.017 "adrfam": "IPv4", 00:20:17.017 "traddr": "10.0.0.1", 00:20:17.017 "trsvcid": "51406", 00:20:17.017 "trtype": "TCP" 00:20:17.017 }, 00:20:17.017 "qid": 0, 00:20:17.017 "state": "enabled", 00:20:17.017 "thread": "nvmf_tgt_poll_group_000" 00:20:17.017 } 00:20:17.017 ]' 00:20:17.017 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.017 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.017 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.017 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:17.017 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.275 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.275 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.275 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.534 18:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:20:17.534 18:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:20:18.121 18:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.121 18:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:18.121 18:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.121 18:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.121 18:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.121 18:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:18.121 18:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.121 18:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.121 
18:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:18.121 18:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:18.391 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:18.391 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.391 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:18.391 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:18.391 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:18.391 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.391 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.391 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.391 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.391 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.391 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.391 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.391 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.650 00:20:18.650 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.650 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.650 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.909 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.168 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.168 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.168 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.168 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.168 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.168 { 00:20:19.168 "auth": { 00:20:19.168 "dhgroup": "null", 00:20:19.168 "digest": "sha512", 00:20:19.168 "state": "completed" 00:20:19.168 }, 00:20:19.168 "cntlid": 97, 00:20:19.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:19.168 "listen_address": { 00:20:19.168 "adrfam": "IPv4", 00:20:19.168 "traddr": "10.0.0.3", 00:20:19.168 "trsvcid": "4420", 00:20:19.168 "trtype": "TCP" 00:20:19.168 }, 00:20:19.168 "peer_address": { 00:20:19.168 "adrfam": "IPv4", 00:20:19.168 "traddr": "10.0.0.1", 00:20:19.168 "trsvcid": "51444", 00:20:19.168 "trtype": "TCP" 00:20:19.168 }, 00:20:19.168 "qid": 0, 00:20:19.168 "state": "enabled", 00:20:19.168 "thread": "nvmf_tgt_poll_group_000" 00:20:19.168 } 00:20:19.168 ]' 00:20:19.168 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.168 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:19.168 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.168 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:19.168 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.168 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.168 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.168 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.742 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:20:19.742 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:20:20.309 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.309 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:20.309 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.309 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.309 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:20:20.309 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.309 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:20.309 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:20.569 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:20.569 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.569 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:20.569 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:20.569 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:20.569 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.569 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.569 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.569 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.569 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.569 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.569 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.569 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.136 00:20:21.136 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.136 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.136 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.395 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.395 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.395 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.395 18:40:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.395 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.395 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.395 { 00:20:21.395 "auth": { 00:20:21.395 "dhgroup": "null", 00:20:21.395 "digest": "sha512", 00:20:21.395 "state": "completed" 00:20:21.395 }, 00:20:21.395 "cntlid": 99, 00:20:21.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:21.395 "listen_address": { 00:20:21.395 "adrfam": "IPv4", 00:20:21.395 "traddr": "10.0.0.3", 00:20:21.395 "trsvcid": "4420", 00:20:21.395 "trtype": "TCP" 00:20:21.395 }, 00:20:21.395 "peer_address": { 00:20:21.395 "adrfam": "IPv4", 00:20:21.395 "traddr": "10.0.0.1", 00:20:21.395 "trsvcid": "51464", 00:20:21.395 "trtype": "TCP" 00:20:21.395 }, 00:20:21.395 "qid": 0, 00:20:21.395 "state": "enabled", 00:20:21.395 "thread": "nvmf_tgt_poll_group_000" 00:20:21.395 } 00:20:21.395 ]' 00:20:21.395 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.395 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:21.395 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.395 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:21.395 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.654 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.654 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.654 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.913 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:20:21.913 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:20:22.848 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.848 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:22.848 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.849 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.849 18:40:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.849 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.849 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:22.849 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:23.108 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:23.108 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.108 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:23.108 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:23.108 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:23.108 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.108 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.108 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.108 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.108 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.108 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.108 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.108 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.366 00:20:23.366 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.366 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.366 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.625 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.625 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.625 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.625 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.625 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.625 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.625 { 00:20:23.625 "auth": { 00:20:23.625 "dhgroup": "null", 00:20:23.625 "digest": "sha512", 00:20:23.625 "state": "completed" 00:20:23.625 }, 00:20:23.625 "cntlid": 101, 00:20:23.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:23.625 "listen_address": { 00:20:23.625 "adrfam": "IPv4", 00:20:23.625 "traddr": "10.0.0.3", 00:20:23.625 "trsvcid": "4420", 00:20:23.625 "trtype": "TCP" 00:20:23.625 }, 00:20:23.625 "peer_address": { 00:20:23.625 "adrfam": "IPv4", 00:20:23.625 "traddr": "10.0.0.1", 00:20:23.625 "trsvcid": "51478", 00:20:23.625 "trtype": "TCP" 00:20:23.625 }, 00:20:23.625 "qid": 0, 00:20:23.625 "state": "enabled", 00:20:23.625 "thread": "nvmf_tgt_poll_group_000" 00:20:23.625 } 00:20:23.625 ]' 00:20:23.885 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.885 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.885 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.885 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:23.885 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.885 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.885 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.885 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.144 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:20:24.144 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:20:25.079 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.079 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:25.079 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.079 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:20:25.079 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.079 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.079 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:25.079 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:25.338 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:25.338 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.338 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:25.338 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:25.338 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:25.338 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.338 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:20:25.338 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.338 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.338 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.338 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:25.338 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.338 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.597 00:20:25.597 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.597 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.597 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.855 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.855 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.855 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:25.855 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.855 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.855 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.855 { 00:20:25.855 "auth": { 00:20:25.855 "dhgroup": "null", 00:20:25.855 "digest": "sha512", 00:20:25.855 "state": "completed" 00:20:25.855 }, 00:20:25.855 "cntlid": 103, 00:20:25.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:25.855 "listen_address": { 00:20:25.855 "adrfam": "IPv4", 00:20:25.855 "traddr": "10.0.0.3", 00:20:25.855 "trsvcid": "4420", 00:20:25.855 "trtype": "TCP" 00:20:25.855 }, 00:20:25.855 "peer_address": { 00:20:25.855 "adrfam": "IPv4", 00:20:25.855 "traddr": "10.0.0.1", 00:20:25.855 "trsvcid": "51508", 00:20:25.855 "trtype": "TCP" 00:20:25.855 }, 00:20:25.855 "qid": 0, 00:20:25.855 "state": "enabled", 00:20:25.855 "thread": "nvmf_tgt_poll_group_000" 00:20:25.855 } 00:20:25.855 ]' 00:20:25.855 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.855 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.855 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.114 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:26.114 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.114 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.114 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.114 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.373 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:20:26.373 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:20:27.309 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.310 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:27.310 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.310 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.310 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:20:27.310 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.310 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.310 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:27.310 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:27.569 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:27.569 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.569 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:27.569 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:27.569 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:27.569 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.569 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.569 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.569 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.569 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.569 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.569 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.569 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.828 00:20:27.828 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.828 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.828 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.396 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.396 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.396 
18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.396 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.396 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.396 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.396 { 00:20:28.396 "auth": { 00:20:28.396 "dhgroup": "ffdhe2048", 00:20:28.396 "digest": "sha512", 00:20:28.396 "state": "completed" 00:20:28.396 }, 00:20:28.396 "cntlid": 105, 00:20:28.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:28.396 "listen_address": { 00:20:28.396 "adrfam": "IPv4", 00:20:28.396 "traddr": "10.0.0.3", 00:20:28.396 "trsvcid": "4420", 00:20:28.396 "trtype": "TCP" 00:20:28.396 }, 00:20:28.396 "peer_address": { 00:20:28.396 "adrfam": "IPv4", 00:20:28.396 "traddr": "10.0.0.1", 00:20:28.396 "trsvcid": "47356", 00:20:28.396 "trtype": "TCP" 00:20:28.396 }, 00:20:28.396 "qid": 0, 00:20:28.396 "state": "enabled", 00:20:28.396 "thread": "nvmf_tgt_poll_group_000" 00:20:28.396 } 00:20:28.396 ]' 00:20:28.396 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.396 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:28.396 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.396 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:28.396 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.396 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.396 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.396 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.655 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:20:28.655 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:20:29.591 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.591 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:29.591 18:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.591 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.591 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.591 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.591 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:29.591 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:29.849 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:29.849 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.849 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:29.849 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:29.849 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:29.849 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.849 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.849 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.849 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.849 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.849 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.849 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.849 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.108 00:20:30.108 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.108 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.108 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.676 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:20:30.676 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.676 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.676 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.676 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.676 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.676 { 00:20:30.676 "auth": { 00:20:30.676 "dhgroup": "ffdhe2048", 00:20:30.676 "digest": "sha512", 00:20:30.676 "state": "completed" 00:20:30.676 }, 00:20:30.676 "cntlid": 107, 00:20:30.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:30.676 "listen_address": { 00:20:30.676 "adrfam": "IPv4", 00:20:30.676 "traddr": "10.0.0.3", 00:20:30.676 "trsvcid": "4420", 00:20:30.676 "trtype": "TCP" 00:20:30.676 }, 00:20:30.676 "peer_address": { 00:20:30.676 "adrfam": "IPv4", 00:20:30.676 "traddr": "10.0.0.1", 00:20:30.676 "trsvcid": "47382", 00:20:30.676 "trtype": "TCP" 00:20:30.676 }, 00:20:30.676 "qid": 0, 00:20:30.676 "state": "enabled", 00:20:30.676 "thread": "nvmf_tgt_poll_group_000" 00:20:30.676 } 00:20:30.676 ]' 00:20:30.676 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.676 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.676 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.676 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:30.676 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.676 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.676 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.676 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.935 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:20:30.935 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:20:31.918 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.918 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:31.918 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.918 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.918 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.918 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.918 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:31.918 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:32.195 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:32.195 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.195 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:32.195 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:32.195 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:32.195 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.195 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.195 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.195 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.195 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.195 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.195 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.195 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.454 00:20:32.454 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.454 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.454 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:32.713 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.714 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.714 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.714 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.714 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.714 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.714 { 00:20:32.714 "auth": { 00:20:32.714 "dhgroup": "ffdhe2048", 00:20:32.714 "digest": "sha512", 00:20:32.714 "state": "completed" 00:20:32.714 }, 00:20:32.714 "cntlid": 109, 00:20:32.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:32.714 "listen_address": { 00:20:32.714 "adrfam": "IPv4", 00:20:32.714 "traddr": "10.0.0.3", 00:20:32.714 "trsvcid": "4420", 00:20:32.714 "trtype": "TCP" 00:20:32.714 }, 00:20:32.714 "peer_address": { 00:20:32.714 "adrfam": "IPv4", 00:20:32.714 "traddr": "10.0.0.1", 00:20:32.714 "trsvcid": "47412", 00:20:32.714 "trtype": "TCP" 00:20:32.714 }, 00:20:32.714 "qid": 0, 00:20:32.714 "state": "enabled", 00:20:32.714 "thread": "nvmf_tgt_poll_group_000" 00:20:32.714 } 00:20:32.714 ]' 00:20:32.714 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.972 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.972 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.972 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:32.972 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.972 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.972 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.972 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.231 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:20:33.231 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:20:34.168 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.168 18:40:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:34.168 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.168 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.168 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.168 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.168 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:34.168 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:34.426 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:34.426 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.426 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:34.426 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:34.426 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:34.426 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.426 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:20:34.426 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.426 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.426 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.426 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:34.426 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.426 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.685 00:20:34.685 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.685 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.685 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.944 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.944 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.944 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.944 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.203 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.203 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.203 { 00:20:35.203 "auth": { 00:20:35.203 "dhgroup": "ffdhe2048", 00:20:35.203 "digest": "sha512", 00:20:35.203 "state": "completed" 00:20:35.203 }, 00:20:35.203 "cntlid": 111, 00:20:35.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:35.203 "listen_address": { 00:20:35.203 "adrfam": "IPv4", 00:20:35.203 "traddr": "10.0.0.3", 00:20:35.203 "trsvcid": "4420", 00:20:35.203 "trtype": "TCP" 00:20:35.203 }, 00:20:35.203 "peer_address": { 00:20:35.203 "adrfam": "IPv4", 00:20:35.203 "traddr": "10.0.0.1", 00:20:35.203 "trsvcid": "47442", 00:20:35.203 "trtype": "TCP" 00:20:35.203 }, 00:20:35.203 "qid": 0, 00:20:35.203 "state": "enabled", 00:20:35.203 "thread": "nvmf_tgt_poll_group_000" 00:20:35.203 } 00:20:35.203 ]' 00:20:35.203 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.203 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.203 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.203 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:35.203 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.203 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.203 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.203 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.461 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:20:35.461 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:20:36.396 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.396 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:36.396 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.396 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.396 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.396 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.396 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.396 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:36.396 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:36.396 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:36.396 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.396 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:36.396 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:36.396 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:36.396 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.654 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.654 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.654 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.654 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.654 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.654 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.654 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.913 00:20:36.913 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.913 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
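[Annotation] The trace above repeats one fixed verification pattern per digest/dhgroup/key combination; at this point it is running sha512 with ffdhe2048/ffdhe3072 groups and is on key0 of the ffdhe3072 round. Below is a minimal standalone sketch of that pattern, reconstructed only from the RPC invocations visible in this trace. Assumptions are marked: the target-side rpc.py socket is never shown here (rpc_cmd wraps it), so TGT_SOCK is a placeholder, and key0/ckey0 are keyring names the test loads earlier; connect_authenticate() in target/auth.sh remains the canonical version.

#!/usr/bin/env bash
# Sketch of one connect_authenticate round, assembled from the trace above.
# Hypothetical: TGT_SOCK (the target RPC socket is not visible in this log).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock   # host-side SPDK app, as in the hostrpc calls above
TGT_SOCK=/var/tmp/tgt.sock     # placeholder: not shown in this trace
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6

# 1. Pin the host to a single digest/dhgroup pair for this iteration.
$RPC -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# 2. Allow the host on the target with the DH-HMAC-CHAP key pair under test.
$RPC -s "$TGT_SOCK" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach a controller from the host side, which triggers authentication.
$RPC -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. Verify the controller came up and the negotiated auth parameters match,
#    mirroring the checks at target/auth.sh@73-77 in the trace.
[[ $($RPC -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$($RPC -s "$TGT_SOCK" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# 5. Tear down and deauthorize the host before the next combination.
$RPC -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
$RPC -s "$TGT_SOCK" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The nvme connect / nvme disconnect legs at target/auth.sh@80-83 in the trace exercise the same key pair through the kernel initiator, passing the literal DHHC-1 secrets instead of keyring names; the sketch stops at the SPDK host path for brevity.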
00:20:36.913 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.171 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.171 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.171 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.171 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.171 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.171 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.171 { 00:20:37.171 "auth": { 00:20:37.171 "dhgroup": "ffdhe3072", 00:20:37.171 "digest": "sha512", 00:20:37.171 "state": "completed" 00:20:37.171 }, 00:20:37.171 "cntlid": 113, 00:20:37.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:37.171 "listen_address": { 00:20:37.171 "adrfam": "IPv4", 00:20:37.171 "traddr": "10.0.0.3", 00:20:37.171 "trsvcid": "4420", 00:20:37.171 "trtype": "TCP" 00:20:37.171 }, 00:20:37.171 "peer_address": { 00:20:37.171 "adrfam": "IPv4", 00:20:37.171 "traddr": "10.0.0.1", 00:20:37.171 "trsvcid": "40270", 00:20:37.171 "trtype": "TCP" 00:20:37.171 }, 00:20:37.171 "qid": 0, 00:20:37.171 "state": "enabled", 00:20:37.171 "thread": "nvmf_tgt_poll_group_000" 00:20:37.171 } 00:20:37.171 ]' 00:20:37.171 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.171 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.171 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.172 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:37.172 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.430 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.430 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.431 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.431 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:20:37.431 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret 
DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:20:37.997 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.997 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:37.997 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.997 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.997 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.997 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.997 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:37.997 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:38.565 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:38.565 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.565 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:38.565 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:38.565 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:38.565 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.565 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.565 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.565 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.565 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.565 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.565 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.565 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.824 00:20:38.824 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.824 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.824 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.083 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.083 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.083 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.083 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.083 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.083 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.083 { 00:20:39.083 "auth": { 00:20:39.083 "dhgroup": "ffdhe3072", 00:20:39.083 "digest": "sha512", 00:20:39.083 "state": "completed" 00:20:39.083 }, 00:20:39.083 "cntlid": 115, 00:20:39.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:39.083 "listen_address": { 00:20:39.083 "adrfam": "IPv4", 00:20:39.083 "traddr": "10.0.0.3", 00:20:39.083 "trsvcid": "4420", 00:20:39.083 "trtype": "TCP" 00:20:39.083 }, 00:20:39.083 "peer_address": { 00:20:39.083 "adrfam": "IPv4", 00:20:39.083 "traddr": "10.0.0.1", 00:20:39.083 "trsvcid": "40292", 00:20:39.083 "trtype": "TCP" 00:20:39.083 }, 00:20:39.083 "qid": 0, 00:20:39.083 "state": "enabled", 00:20:39.083 "thread": "nvmf_tgt_poll_group_000" 00:20:39.083 } 00:20:39.083 ]' 00:20:39.083 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.083 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.083 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.083 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:39.083 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.083 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.083 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.083 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.342 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:20:39.342 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 
81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:20:39.907 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.907 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:39.907 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.907 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.907 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.907 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.907 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:39.907 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:40.166 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:20:40.166 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.166 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:40.166 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:40.166 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:40.166 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.166 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.166 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.166 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.166 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.166 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.166 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.166 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.733 00:20:40.733 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.733 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.733 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.992 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.992 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.992 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.992 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.992 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.992 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.992 { 00:20:40.992 "auth": { 00:20:40.992 "dhgroup": "ffdhe3072", 00:20:40.992 "digest": "sha512", 00:20:40.992 "state": "completed" 00:20:40.992 }, 00:20:40.992 "cntlid": 117, 00:20:40.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:40.992 "listen_address": { 00:20:40.992 "adrfam": "IPv4", 00:20:40.992 "traddr": "10.0.0.3", 00:20:40.992 "trsvcid": "4420", 00:20:40.992 "trtype": "TCP" 00:20:40.992 }, 00:20:40.992 "peer_address": { 00:20:40.992 "adrfam": "IPv4", 00:20:40.992 "traddr": "10.0.0.1", 00:20:40.992 "trsvcid": "40328", 00:20:40.992 "trtype": "TCP" 00:20:40.992 }, 00:20:40.992 "qid": 0, 00:20:40.992 "state": "enabled", 00:20:40.992 "thread": "nvmf_tgt_poll_group_000" 00:20:40.992 } 00:20:40.992 ]' 00:20:40.992 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.992 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:40.992 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.992 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:40.992 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.992 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.992 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.992 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.252 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:20:41.252 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:20:41.819 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.819 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:41.819 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.819 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.819 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.819 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.819 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:41.819 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:42.078 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:42.078 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.078 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:42.078 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:42.078 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:42.078 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.078 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:20:42.078 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.078 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.078 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.078 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:42.078 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.078 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.644 00:20:42.644 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.644 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.644 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.903 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.903 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.903 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.903 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.903 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.903 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.903 { 00:20:42.903 "auth": { 00:20:42.903 "dhgroup": "ffdhe3072", 00:20:42.903 "digest": "sha512", 00:20:42.903 "state": "completed" 00:20:42.903 }, 00:20:42.903 "cntlid": 119, 00:20:42.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:42.903 "listen_address": { 00:20:42.903 "adrfam": "IPv4", 00:20:42.903 "traddr": "10.0.0.3", 00:20:42.903 "trsvcid": "4420", 00:20:42.903 "trtype": "TCP" 00:20:42.903 }, 00:20:42.903 "peer_address": { 00:20:42.903 "adrfam": "IPv4", 00:20:42.903 "traddr": "10.0.0.1", 00:20:42.903 "trsvcid": "40358", 00:20:42.903 "trtype": "TCP" 00:20:42.903 }, 00:20:42.903 "qid": 0, 00:20:42.903 "state": "enabled", 00:20:42.903 "thread": "nvmf_tgt_poll_group_000" 00:20:42.903 } 00:20:42.903 ]' 00:20:42.903 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.903 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.903 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.903 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:42.903 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.903 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.903 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.903 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.161 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:20:43.161 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:20:43.758 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.758 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:43.758 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.758 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.758 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.758 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.758 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.758 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:43.758 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:44.016 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:44.016 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.016 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:44.016 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:44.016 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:44.016 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.016 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.016 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.016 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.016 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.016 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.016 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.016 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.583 00:20:44.583 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.583 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.583 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.842 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.842 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.842 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.842 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.842 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.842 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.842 { 00:20:44.842 "auth": { 00:20:44.842 "dhgroup": "ffdhe4096", 00:20:44.842 "digest": "sha512", 00:20:44.842 "state": "completed" 00:20:44.842 }, 00:20:44.842 "cntlid": 121, 00:20:44.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:44.842 "listen_address": { 00:20:44.842 "adrfam": "IPv4", 00:20:44.842 "traddr": "10.0.0.3", 00:20:44.842 "trsvcid": "4420", 00:20:44.842 "trtype": "TCP" 00:20:44.842 }, 00:20:44.842 "peer_address": { 00:20:44.842 "adrfam": "IPv4", 00:20:44.842 "traddr": "10.0.0.1", 00:20:44.842 "trsvcid": "40380", 00:20:44.842 "trtype": "TCP" 00:20:44.842 }, 00:20:44.842 "qid": 0, 00:20:44.842 "state": "enabled", 00:20:44.842 "thread": "nvmf_tgt_poll_group_000" 00:20:44.842 } 00:20:44.842 ]' 00:20:44.842 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.842 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.842 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.842 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:44.842 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.842 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.842 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.842 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.410 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret 
DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:20:45.410 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:20:45.670 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.928 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:45.928 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.928 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.928 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.928 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.928 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:45.928 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:46.186 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:46.186 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.186 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:46.186 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:46.186 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:46.186 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.187 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.187 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.187 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.187 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.187 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.187 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.187 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.445 00:20:46.445 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.445 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.445 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.704 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.963 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.963 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.963 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.963 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.963 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.963 { 00:20:46.963 "auth": { 00:20:46.963 "dhgroup": "ffdhe4096", 00:20:46.963 "digest": "sha512", 00:20:46.963 "state": "completed" 00:20:46.963 }, 00:20:46.963 "cntlid": 123, 00:20:46.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:46.963 "listen_address": { 00:20:46.963 "adrfam": "IPv4", 00:20:46.963 "traddr": "10.0.0.3", 00:20:46.963 "trsvcid": "4420", 00:20:46.963 "trtype": "TCP" 00:20:46.963 }, 00:20:46.963 "peer_address": { 00:20:46.963 "adrfam": "IPv4", 00:20:46.963 "traddr": "10.0.0.1", 00:20:46.963 "trsvcid": "34412", 00:20:46.963 "trtype": "TCP" 00:20:46.963 }, 00:20:46.963 "qid": 0, 00:20:46.963 "state": "enabled", 00:20:46.963 "thread": "nvmf_tgt_poll_group_000" 00:20:46.963 } 00:20:46.963 ]' 00:20:46.963 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.963 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.963 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.963 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:46.963 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.963 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.963 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.963 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.222 18:41:02 
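
The round that completes above (sha512 / ffdhe4096 / key1) follows the same host/target RPC sequence as every other round in this log. A minimal sketch of one round, reconstructed from the commands visible in the trace — the NQNs, address, and host socket path are the ones this run uses; the target-side calls appear in the log via the framework's rpc_cmd wrapper and are shown here as plain rpc.py invocations for clarity:

    #!/usr/bin/env bash
    # One connect_authenticate round, as driven by target/auth.sh above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6

    # Host side: pin the initiator to one digest/dhgroup combination.
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Target side: allow this host on the subsystem with the round's key pair.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attach a controller, authenticating with the same keys.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify the controller exists and the qpair authenticated, then tear down.
    "$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name'
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn"
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
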
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:20:47.222 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:20:47.789 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.789 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:47.789 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.789 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.789 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.789 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.789 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:47.789 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:48.048 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:20:48.048 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.048 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:48.048 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:48.048 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:48.048 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.048 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.048 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.048 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.048 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.048 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.048 18:41:03 
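
The pass/fail decision in each round comes from the target's qpair listing rather than from the connect return code: the auth.sh@75–@77 assertions read the negotiated parameters back out of the JSON blocks shown above. A condensed equivalent of those checks — rpc_cmd is the test framework's wrapper for the target RPC socket, and feeding jq via a herestring is an assumption about the script's plumbing:

    # Read the qpair list back from the target and assert what was negotiated.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512"    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
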
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.048 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.307 00:20:48.307 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.307 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.307 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.566 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.566 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.566 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.566 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.566 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.566 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.566 { 00:20:48.566 "auth": { 00:20:48.566 "dhgroup": "ffdhe4096", 00:20:48.566 "digest": "sha512", 00:20:48.566 "state": "completed" 00:20:48.566 }, 00:20:48.566 "cntlid": 125, 00:20:48.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:48.566 "listen_address": { 00:20:48.566 "adrfam": "IPv4", 00:20:48.566 "traddr": "10.0.0.3", 00:20:48.566 "trsvcid": "4420", 00:20:48.566 "trtype": "TCP" 00:20:48.566 }, 00:20:48.566 "peer_address": { 00:20:48.566 "adrfam": "IPv4", 00:20:48.566 "traddr": "10.0.0.1", 00:20:48.566 "trsvcid": "34444", 00:20:48.566 "trtype": "TCP" 00:20:48.566 }, 00:20:48.566 "qid": 0, 00:20:48.566 "state": "enabled", 00:20:48.566 "thread": "nvmf_tgt_poll_group_000" 00:20:48.566 } 00:20:48.566 ]' 00:20:48.566 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.824 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.824 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.824 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:48.824 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.824 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.824 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.824 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.082 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:20:49.082 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.018 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.585 00:20:50.585 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.585 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.585 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.844 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.844 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.844 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.844 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.844 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.844 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.844 { 00:20:50.844 "auth": { 00:20:50.844 "dhgroup": "ffdhe4096", 00:20:50.844 "digest": "sha512", 00:20:50.844 "state": "completed" 00:20:50.844 }, 00:20:50.844 "cntlid": 127, 00:20:50.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:50.844 "listen_address": { 00:20:50.844 "adrfam": "IPv4", 00:20:50.844 "traddr": "10.0.0.3", 00:20:50.844 "trsvcid": "4420", 00:20:50.844 "trtype": "TCP" 00:20:50.844 }, 00:20:50.844 "peer_address": { 00:20:50.844 "adrfam": "IPv4", 00:20:50.844 "traddr": "10.0.0.1", 00:20:50.844 "trsvcid": "34474", 00:20:50.844 "trtype": "TCP" 00:20:50.844 }, 00:20:50.844 "qid": 0, 00:20:50.844 "state": "enabled", 00:20:50.844 "thread": "nvmf_tgt_poll_group_000" 00:20:50.844 } 00:20:50.844 ]' 00:20:50.844 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.844 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.844 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.844 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:50.844 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.844 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.844 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.844 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.410 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:20:51.410 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:20:51.668 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.668 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:51.668 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.668 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.927 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.927 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.927 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.927 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:51.927 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:51.927 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:20:51.927 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.927 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:51.927 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:51.927 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:51.927 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.927 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.927 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.927 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.186 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.186 18:41:07 
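
Besides the SPDK host stack, each round also exercises the kernel initiator: the nvme_connect helper at target/auth.sh@36 expands to an nvme-cli call of the shape below (taken from the expansions in the log; the DHHC-1 secrets are abbreviated here, the log shows them in full). Per the DH-HMAC-CHAP secret format, the two digits after "DHHC-1:" identify the HMAC wrapping the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512):

    # Kernel-initiator leg of a round, as expanded from nvme_connect above.
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 \
        --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 \
        --dhchap-secret 'DHHC-1:00:NTEwNzk5...vgpGLQ==:' \
        --dhchap-ctrl-secret 'DHHC-1:03:NjE1ZWFj...Zuusyaw=:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
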
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.186 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.186 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.444 00:20:52.444 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.444 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.444 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.703 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.703 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.703 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.703 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.703 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.703 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.703 { 00:20:52.703 "auth": { 00:20:52.703 "dhgroup": "ffdhe6144", 00:20:52.703 "digest": "sha512", 00:20:52.703 "state": "completed" 00:20:52.703 }, 00:20:52.703 "cntlid": 129, 00:20:52.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:52.703 "listen_address": { 00:20:52.703 "adrfam": "IPv4", 00:20:52.703 "traddr": "10.0.0.3", 00:20:52.703 "trsvcid": "4420", 00:20:52.703 "trtype": "TCP" 00:20:52.703 }, 00:20:52.703 "peer_address": { 00:20:52.703 "adrfam": "IPv4", 00:20:52.703 "traddr": "10.0.0.1", 00:20:52.703 "trsvcid": "34488", 00:20:52.703 "trtype": "TCP" 00:20:52.703 }, 00:20:52.703 "qid": 0, 00:20:52.703 "state": "enabled", 00:20:52.703 "thread": "nvmf_tgt_poll_group_000" 00:20:52.703 } 00:20:52.703 ]' 00:20:52.703 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.703 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.703 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.703 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:52.703 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.962 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.962 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.962 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.230 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:20:53.230 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:20:53.845 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.846 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:53.846 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.846 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.846 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.846 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.846 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:53.846 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:54.103 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:20:54.104 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.104 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:54.104 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:54.104 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:54.104 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.104 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.104 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.104 18:41:09 
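
The trace markers at target/auth.sh@119–@123 reveal the driver loop that produces this repetition: the outer loop walks the DH groups, the inner loop walks the key ids, and in the stretch of the log shown here the digest is pinned to sha512. Reconstructed from those markers (dhgroups and keys are arrays populated earlier in the script; hostrpc is the rpc.py wrapper for /var/tmp/host.sock seen at auth.sh@31):

    # Shape of the loop at target/auth.sh@119-@123.
    for dhgroup in "${dhgroups[@]}"; do       # ffdhe3072, ffdhe4096, ffdhe6144, ...
        for keyid in "${!keys[@]}"; do        # 0 1 2 3
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done
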
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.104 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.104 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.104 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.104 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.671 00:20:54.671 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.671 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.671 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.929 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.929 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.929 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.929 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.929 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.929 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.929 { 00:20:54.929 "auth": { 00:20:54.929 "dhgroup": "ffdhe6144", 00:20:54.929 "digest": "sha512", 00:20:54.929 "state": "completed" 00:20:54.929 }, 00:20:54.929 "cntlid": 131, 00:20:54.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:54.929 "listen_address": { 00:20:54.929 "adrfam": "IPv4", 00:20:54.929 "traddr": "10.0.0.3", 00:20:54.929 "trsvcid": "4420", 00:20:54.929 "trtype": "TCP" 00:20:54.929 }, 00:20:54.929 "peer_address": { 00:20:54.929 "adrfam": "IPv4", 00:20:54.929 "traddr": "10.0.0.1", 00:20:54.929 "trsvcid": "34516", 00:20:54.929 "trtype": "TCP" 00:20:54.929 }, 00:20:54.929 "qid": 0, 00:20:54.929 "state": "enabled", 00:20:54.929 "thread": "nvmf_tgt_poll_group_000" 00:20:54.929 } 00:20:54.929 ]' 00:20:54.929 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.929 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.929 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.929 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:54.929 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:20:54.929 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.929 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.929 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.188 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:20:55.188 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:20:55.754 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.754 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:55.754 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.754 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.754 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.754 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.754 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:55.754 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:56.320 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:20:56.320 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.320 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:56.320 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:56.320 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:56.320 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.320 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.320 18:41:11 
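
One detail worth noting: the key3 rounds above omit --dhchap-ctrlr-key on both nvmf_subsystem_add_host and the attach, and their nvme connect carries no --dhchap-ctrl-secret. The assignment visible at target/auth.sh@68 explains why — when no controller key is configured for a key id, the conditional expansion yields an empty array and the flag is dropped, so those rounds run host-only (unidirectional) authentication while key0–key2 also authenticate the controller (bidirectional):

    # From target/auth.sh@68; $3 is the keyid argument of connect_authenticate.
    # The array stays empty when ckeys[$3] is unset, so no controller key is
    # passed for that round.
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
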
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.320 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.320 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.320 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.320 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.320 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.579 00:20:56.579 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.579 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.579 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.837 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.837 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.837 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.837 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.837 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.837 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.837 { 00:20:56.837 "auth": { 00:20:56.837 "dhgroup": "ffdhe6144", 00:20:56.837 "digest": "sha512", 00:20:56.837 "state": "completed" 00:20:56.837 }, 00:20:56.837 "cntlid": 133, 00:20:56.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:56.837 "listen_address": { 00:20:56.837 "adrfam": "IPv4", 00:20:56.837 "traddr": "10.0.0.3", 00:20:56.837 "trsvcid": "4420", 00:20:56.837 "trtype": "TCP" 00:20:56.837 }, 00:20:56.837 "peer_address": { 00:20:56.837 "adrfam": "IPv4", 00:20:56.837 "traddr": "10.0.0.1", 00:20:56.837 "trsvcid": "43886", 00:20:56.837 "trtype": "TCP" 00:20:56.837 }, 00:20:56.837 "qid": 0, 00:20:56.837 "state": "enabled", 00:20:56.837 "thread": "nvmf_tgt_poll_group_000" 00:20:56.837 } 00:20:56.837 ]' 00:20:56.837 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.837 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.837 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.096 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:20:57.096 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.096 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.096 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.096 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.354 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:20:57.354 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:20:57.920 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.920 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:20:57.920 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.920 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.920 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.920 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.920 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:57.920 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:58.179 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:20:58.179 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.179 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:58.179 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:58.179 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:58.179 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.179 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:20:58.179 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.179 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.179 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.179 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:58.179 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.179 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.745 00:20:58.746 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.746 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.746 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.004 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.004 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.004 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.004 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.004 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.004 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.004 { 00:20:59.004 "auth": { 00:20:59.004 "dhgroup": "ffdhe6144", 00:20:59.004 "digest": "sha512", 00:20:59.004 "state": "completed" 00:20:59.004 }, 00:20:59.004 "cntlid": 135, 00:20:59.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:20:59.004 "listen_address": { 00:20:59.004 "adrfam": "IPv4", 00:20:59.004 "traddr": "10.0.0.3", 00:20:59.004 "trsvcid": "4420", 00:20:59.004 "trtype": "TCP" 00:20:59.004 }, 00:20:59.004 "peer_address": { 00:20:59.004 "adrfam": "IPv4", 00:20:59.004 "traddr": "10.0.0.1", 00:20:59.004 "trsvcid": "43920", 00:20:59.004 "trtype": "TCP" 00:20:59.004 }, 00:20:59.004 "qid": 0, 00:20:59.005 "state": "enabled", 00:20:59.005 "thread": "nvmf_tgt_poll_group_000" 00:20:59.005 } 00:20:59.005 ]' 00:20:59.005 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.005 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.005 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.005 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:59.005 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.005 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.005 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.005 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.263 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:20:59.263 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:21:00.196 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.196 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:21:00.196 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.196 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.196 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.196 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.196 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.196 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:00.196 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:00.196 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:00.196 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.196 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:00.196 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:00.196 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:00.196 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.197 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.197 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.197 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.197 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.197 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.197 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.197 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.763 00:21:00.763 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.763 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.763 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.022 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.022 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.022 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.022 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.022 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.022 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.022 { 00:21:01.022 "auth": { 00:21:01.022 "dhgroup": "ffdhe8192", 00:21:01.022 "digest": "sha512", 00:21:01.022 "state": "completed" 00:21:01.022 }, 00:21:01.022 "cntlid": 137, 00:21:01.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:21:01.022 "listen_address": { 00:21:01.022 "adrfam": "IPv4", 00:21:01.022 "traddr": "10.0.0.3", 00:21:01.022 "trsvcid": "4420", 00:21:01.022 "trtype": "TCP" 00:21:01.022 }, 00:21:01.022 "peer_address": { 00:21:01.022 "adrfam": "IPv4", 00:21:01.022 "traddr": "10.0.0.1", 00:21:01.022 "trsvcid": "43952", 00:21:01.022 "trtype": "TCP" 00:21:01.022 }, 00:21:01.022 "qid": 0, 00:21:01.022 "state": "enabled", 00:21:01.022 "thread": "nvmf_tgt_poll_group_000" 00:21:01.022 } 00:21:01.022 ]' 00:21:01.022 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.022 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.022 18:41:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.022 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:01.022 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.281 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.281 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.281 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.540 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:21:01.540 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:21:02.108 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.108 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:21:02.108 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.108 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.108 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.108 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.108 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:02.108 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:02.366 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:02.366 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.366 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:02.366 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:02.367 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:02.367 18:41:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.367 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.367 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.367 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.367 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.367 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.367 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.367 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.934 00:21:02.934 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.934 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.934 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.193 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.193 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.193 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.193 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.193 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.193 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.193 { 00:21:03.193 "auth": { 00:21:03.193 "dhgroup": "ffdhe8192", 00:21:03.193 "digest": "sha512", 00:21:03.193 "state": "completed" 00:21:03.193 }, 00:21:03.193 "cntlid": 139, 00:21:03.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:21:03.193 "listen_address": { 00:21:03.193 "adrfam": "IPv4", 00:21:03.193 "traddr": "10.0.0.3", 00:21:03.193 "trsvcid": "4420", 00:21:03.193 "trtype": "TCP" 00:21:03.193 }, 00:21:03.193 "peer_address": { 00:21:03.193 "adrfam": "IPv4", 00:21:03.193 "traddr": "10.0.0.1", 00:21:03.193 "trsvcid": "43974", 00:21:03.193 "trtype": "TCP" 00:21:03.193 }, 00:21:03.193 "qid": 0, 00:21:03.193 "state": "enabled", 00:21:03.193 "thread": "nvmf_tgt_poll_group_000" 00:21:03.193 } 00:21:03.193 ]' 00:21:03.193 18:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.193 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.193 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.193 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:03.193 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.193 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.193 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.194 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.461 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:21:03.461 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: --dhchap-ctrl-secret DHHC-1:02:Nzc4NzJkMmIzYTMyY2I5NjA3YTg1M2MzMmI0YTUzYjcwMzUwMzBhZDRiZDcyNGNl4N2nQw==: 00:21:04.042 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.042 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:21:04.042 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.042 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.042 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.042 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.042 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:04.042 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:04.301 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:04.301 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.301 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:04.301 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:21:04.301 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:04.301 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.301 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.301 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.301 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.301 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.301 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.301 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.301 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.868 00:21:04.868 18:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.868 18:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.868 18:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.127 18:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.127 18:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.127 18:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.127 18:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.127 18:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.127 18:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.127 { 00:21:05.127 "auth": { 00:21:05.127 "dhgroup": "ffdhe8192", 00:21:05.127 "digest": "sha512", 00:21:05.127 "state": "completed" 00:21:05.127 }, 00:21:05.127 "cntlid": 141, 00:21:05.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:21:05.127 "listen_address": { 00:21:05.127 "adrfam": "IPv4", 00:21:05.127 "traddr": "10.0.0.3", 00:21:05.127 "trsvcid": "4420", 00:21:05.127 "trtype": "TCP" 00:21:05.127 }, 00:21:05.127 "peer_address": { 00:21:05.127 "adrfam": "IPv4", 00:21:05.127 "traddr": "10.0.0.1", 00:21:05.127 "trsvcid": "44002", 00:21:05.127 "trtype": "TCP" 00:21:05.127 }, 00:21:05.127 "qid": 0, 00:21:05.127 "state": 
"enabled", 00:21:05.127 "thread": "nvmf_tgt_poll_group_000" 00:21:05.127 } 00:21:05.127 ]' 00:21:05.127 18:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.127 18:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.127 18:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.385 18:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:05.385 18:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.385 18:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.385 18:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.385 18:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.643 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:21:05.643 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:01:ZmZkNTcwYzhkZDI5MjhkZWM5N2U4ZWUxZGQyYjg2Y2NdcdgX: 00:21:06.211 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.211 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:21:06.211 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.211 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.211 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.211 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.211 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:06.211 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:06.470 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:06.470 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.470 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:21:06.470 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:06.470 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:06.470 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.470 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:21:06.470 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.470 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.470 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.470 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:06.470 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.470 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.038 00:21:07.038 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.038 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.038 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.296 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.296 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.296 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.296 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.296 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.296 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.296 { 00:21:07.296 "auth": { 00:21:07.296 "dhgroup": "ffdhe8192", 00:21:07.296 "digest": "sha512", 00:21:07.296 "state": "completed" 00:21:07.296 }, 00:21:07.296 "cntlid": 143, 00:21:07.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:21:07.296 "listen_address": { 00:21:07.296 "adrfam": "IPv4", 00:21:07.296 "traddr": "10.0.0.3", 00:21:07.296 "trsvcid": "4420", 00:21:07.296 "trtype": "TCP" 00:21:07.296 }, 00:21:07.296 "peer_address": { 00:21:07.296 "adrfam": "IPv4", 00:21:07.296 "traddr": "10.0.0.1", 00:21:07.296 "trsvcid": "40040", 00:21:07.296 "trtype": "TCP" 00:21:07.296 }, 00:21:07.296 "qid": 0, 00:21:07.296 
"state": "enabled", 00:21:07.296 "thread": "nvmf_tgt_poll_group_000" 00:21:07.296 } 00:21:07.296 ]' 00:21:07.296 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.296 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.296 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.555 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:07.555 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.555 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.555 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.555 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.814 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:21:07.814 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:21:08.382 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.382 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:21:08.382 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.382 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.382 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.382 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:08.382 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:08.382 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:08.382 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:08.382 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:08.382 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:08.642 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:08.642 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.642 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.642 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:08.642 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:08.642 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.642 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.642 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.642 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.642 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.642 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.642 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.642 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.577 00:21:09.577 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.577 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.577 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.835 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.835 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.835 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.835 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.835 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.835 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.835 { 00:21:09.835 "auth": { 00:21:09.835 "dhgroup": "ffdhe8192", 00:21:09.835 "digest": "sha512", 00:21:09.835 "state": "completed" 00:21:09.835 }, 00:21:09.835 
"cntlid": 145, 00:21:09.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:21:09.835 "listen_address": { 00:21:09.835 "adrfam": "IPv4", 00:21:09.835 "traddr": "10.0.0.3", 00:21:09.835 "trsvcid": "4420", 00:21:09.835 "trtype": "TCP" 00:21:09.835 }, 00:21:09.835 "peer_address": { 00:21:09.835 "adrfam": "IPv4", 00:21:09.835 "traddr": "10.0.0.1", 00:21:09.835 "trsvcid": "40078", 00:21:09.835 "trtype": "TCP" 00:21:09.835 }, 00:21:09.835 "qid": 0, 00:21:09.835 "state": "enabled", 00:21:09.835 "thread": "nvmf_tgt_poll_group_000" 00:21:09.835 } 00:21:09.835 ]' 00:21:09.835 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.835 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.835 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.835 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:09.835 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.835 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.835 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.835 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.093 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:21:10.093 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:00:NTEwNzk5MzNiZjBmMzg4YmY1ZWI1NjNiNWE5NzgxZmFiMDJkODQ0OWE0NjI4NmYzvgpGLQ==: --dhchap-ctrl-secret DHHC-1:03:NjE1ZWFjYTcwNmEzN2VjM2QzNWY4NmUxMGE0MmEyZDVlYmFlMWVmMjZmZjczYjM2OGRjMTMwMzNmZDRjYzAzZuusyaw=: 00:21:11.028 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.028 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:21:11.028 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.028 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.028 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.028 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 00:21:11.028 18:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.028 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.028 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.028 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:11.028 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:11.028 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:11.028 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:11.028 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:11.028 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:11.028 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:11.028 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:11.028 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:11.028 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:11.287 2024/12/14 18:41:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:11.287 request: 00:21:11.287 { 00:21:11.287 "method": "bdev_nvme_attach_controller", 00:21:11.287 "params": { 00:21:11.287 "name": "nvme0", 00:21:11.287 "trtype": "tcp", 00:21:11.287 "traddr": "10.0.0.3", 00:21:11.287 "adrfam": "ipv4", 00:21:11.287 "trsvcid": "4420", 00:21:11.287 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:11.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:21:11.287 "prchk_reftag": false, 00:21:11.287 "prchk_guard": false, 00:21:11.287 "hdgst": false, 00:21:11.287 "ddgst": false, 00:21:11.287 "dhchap_key": "key2", 00:21:11.287 "allow_unrecognized_csi": false 00:21:11.287 } 00:21:11.287 } 00:21:11.287 Got JSON-RPC error response 00:21:11.287 GoRPCClient: error on JSON-RPC call 00:21:11.287 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:11.287 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 
00:21:11.287 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:11.287 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:11.287 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:21:11.287 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.287 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.287 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.287 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.287 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.287 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.287 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.287 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:11.287 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:11.287 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:11.287 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:11.287 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:11.288 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:11.288 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:11.288 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:11.288 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:11.288 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:11.855 2024/12/14 18:41:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:11.856 request: 00:21:11.856 { 00:21:11.856 "method": "bdev_nvme_attach_controller", 00:21:11.856 "params": { 00:21:11.856 "name": "nvme0", 00:21:11.856 "trtype": "tcp", 00:21:11.856 "traddr": "10.0.0.3", 00:21:11.856 "adrfam": "ipv4", 00:21:11.856 "trsvcid": "4420", 00:21:11.856 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:11.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:21:11.856 "prchk_reftag": false, 00:21:11.856 "prchk_guard": false, 00:21:11.856 "hdgst": false, 00:21:11.856 "ddgst": false, 00:21:11.856 "dhchap_key": "key1", 00:21:11.856 "dhchap_ctrlr_key": "ckey2", 00:21:11.856 "allow_unrecognized_csi": false 00:21:11.856 } 00:21:11.856 } 00:21:11.856 Got JSON-RPC error response 00:21:11.856 GoRPCClient: error on JSON-RPC call 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # 
type -t bdev_connect 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.856 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.423 2024/12/14 18:41:28 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:12.423 request: 00:21:12.423 { 00:21:12.423 "method": "bdev_nvme_attach_controller", 00:21:12.423 "params": { 00:21:12.423 "name": "nvme0", 00:21:12.423 "trtype": "tcp", 00:21:12.423 "traddr": "10.0.0.3", 00:21:12.423 "adrfam": "ipv4", 00:21:12.423 "trsvcid": "4420", 00:21:12.423 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:12.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:21:12.423 "prchk_reftag": false, 00:21:12.423 "prchk_guard": false, 00:21:12.423 "hdgst": false, 00:21:12.423 "ddgst": false, 00:21:12.423 "dhchap_key": "key1", 00:21:12.423 "dhchap_ctrlr_key": "ckey1", 00:21:12.423 "allow_unrecognized_csi": false 00:21:12.423 } 00:21:12.423 } 00:21:12.423 Got JSON-RPC error response 00:21:12.423 GoRPCClient: error on JSON-RPC call 00:21:12.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:12.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:12.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:12.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:12.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:21:12.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 94187 00:21:12.423 18:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 94187 ']' 00:21:12.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 94187 00:21:12.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:12.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:12.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94187 00:21:12.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:12.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:12.423 killing process with pid 94187 00:21:12.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94187' 00:21:12.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 94187 00:21:12.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 94187 00:21:12.681 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:12.681 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:12.681 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:12.681 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.681 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=99020 00:21:12.681 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 99020 00:21:12.681 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 99020 ']' 00:21:12.681 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:12.681 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.681 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:12.681 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
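The entries above kill the previous nvmf target (pid 94187) and relaunch it with auth debug logging enabled. A condensed sketch of that restart step, assuming the surrounding shell owns the old process as the killprocess/wait trace suggests (binary path, netns name, and flags are taken verbatim from the trace; old_pid is a stand-in for 94187):

  # Stop the previous target, then start a fresh one inside the test netns.
  # --wait-for-rpc holds the app before subsystem init until an RPC releases it;
  # -L nvmf_auth turns on the DH-HMAC-CHAP debug log component.
  kill "$old_pid" && wait "$old_pid"
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!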
00:21:12.940 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:12.940 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.886 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:13.886 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:13.886 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:13.886 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:13.886 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.886 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.886 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:13.886 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 99020 00:21:13.886 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 99020 ']' 00:21:13.886 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.886 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.886 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
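Once /var/tmp/spdk.sock is accepting RPCs, the entries that follow register each generated DHCHAP key file in the target's keyring. A minimal sketch of that loading loop as traced at target/auth.sh@174-176 (keys and ckeys are the script's own arrays of the /tmp/spdk.key-* paths generated earlier in this run):

  # keyring_file_add_key exposes a key file to the target under a short name;
  # controller (bidirectional) keys are optional, hence the -n guard.
  for i in "${!keys[@]}"; do
      rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"
      [[ -n ${ckeys[$i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
  done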
00:21:13.886 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.886 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.162 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:14.162 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:14.162 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:14.162 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.162 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.421 null0 00:21:14.421 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.421 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:14.422 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.xmg 00:21:14.422 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.422 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.422 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.422 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.oid ]] 00:21:14.422 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oid 00:21:14.422 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.422 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.422 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.422 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:14.422 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Zu7 00:21:14.422 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.422 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.422 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.422 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.obU ]] 00:21:14.422 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.obU 00:21:14.422 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.422 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:14.422 18:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.T5z 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.auT ]] 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.auT 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.X77 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
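This is the positive path: the host is added with key3 only, so a connect offering key3 should authenticate. The host-side RPC behind the hostrpc wrapper, exactly as the next entry issues it (-s /var/tmp/host.sock addresses the host-side SPDK app rather than the target):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3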
00:21:14.422 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.359 nvme0n1 00:21:15.359 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.359 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.359 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.618 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.618 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.618 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.618 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.618 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.618 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.618 { 00:21:15.618 "auth": { 00:21:15.618 "dhgroup": "ffdhe8192", 00:21:15.618 "digest": "sha512", 00:21:15.618 "state": "completed" 00:21:15.618 }, 00:21:15.618 "cntlid": 1, 00:21:15.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:21:15.618 "listen_address": { 00:21:15.618 "adrfam": "IPv4", 00:21:15.618 "traddr": "10.0.0.3", 00:21:15.618 "trsvcid": "4420", 00:21:15.618 "trtype": "TCP" 00:21:15.618 }, 00:21:15.618 "peer_address": { 00:21:15.618 "adrfam": "IPv4", 00:21:15.618 "traddr": "10.0.0.1", 00:21:15.618 "trsvcid": "40150", 00:21:15.618 "trtype": "TCP" 00:21:15.618 }, 00:21:15.618 "qid": 0, 00:21:15.618 "state": "enabled", 00:21:15.618 "thread": "nvmf_tgt_poll_group_000" 00:21:15.618 } 00:21:15.618 ]' 00:21:15.618 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.618 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.618 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.618 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.618 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.877 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.877 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.877 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.136 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:21:16.136 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:21:16.703 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.703 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:21:16.703 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.703 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.703 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.703 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key3 00:21:16.703 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.703 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.703 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.703 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:16.703 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:16.962 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:16.962 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:16.962 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:16.962 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:16.962 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.962 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:16.962 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.962 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:16.962 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.962 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.530 2024/12/14 18:41:32 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:17.530 request: 00:21:17.530 { 00:21:17.530 "method": "bdev_nvme_attach_controller", 00:21:17.530 "params": { 00:21:17.530 "name": "nvme0", 00:21:17.530 "trtype": "tcp", 00:21:17.530 "traddr": "10.0.0.3", 00:21:17.530 "adrfam": "ipv4", 00:21:17.530 "trsvcid": "4420", 00:21:17.530 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:17.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:21:17.530 "prchk_reftag": false, 00:21:17.530 "prchk_guard": false, 00:21:17.530 "hdgst": false, 00:21:17.530 "ddgst": false, 00:21:17.530 "dhchap_key": "key3", 00:21:17.530 "allow_unrecognized_csi": false 00:21:17.530 } 00:21:17.530 } 00:21:17.530 Got JSON-RPC error response 00:21:17.530 GoRPCClient: error on JSON-RPC call 00:21:17.530 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:17.530 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:17.530 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:17.530 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:17.530 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:17.530 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:17.530 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:17.530 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:17.530 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:17.530 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:17.531 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:17.531 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:17.531 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:17.531 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:17.531 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:17.531 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:17.531 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.531 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.098 2024/12/14 18:41:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:18.098 request: 00:21:18.098 { 00:21:18.098 "method": "bdev_nvme_attach_controller", 00:21:18.098 "params": { 00:21:18.098 "name": "nvme0", 00:21:18.098 "trtype": "tcp", 00:21:18.098 "traddr": "10.0.0.3", 00:21:18.098 "adrfam": "ipv4", 00:21:18.098 "trsvcid": "4420", 00:21:18.098 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:18.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:21:18.098 "prchk_reftag": false, 00:21:18.098 "prchk_guard": false, 00:21:18.098 "hdgst": false, 00:21:18.098 "ddgst": false, 00:21:18.098 "dhchap_key": "key3", 00:21:18.098 "allow_unrecognized_csi": false 00:21:18.098 } 00:21:18.098 } 00:21:18.098 Got JSON-RPC error response 00:21:18.098 GoRPCClient: error on JSON-RPC call 00:21:18.098 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:18.098 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:18.098 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:18.098 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:18.098 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:18.098 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:18.098 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:18.098 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:18.098 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:18.098 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:18.357 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:21:18.358 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.358 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.358 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.358 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:21:18.358 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.358 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.358 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.358 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:18.358 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:18.358 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:18.358 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:18.358 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:18.358 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:18.358 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:18.358 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:18.358 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:18.358 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:18.925 2024/12/14 18:41:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:18.925 request: 00:21:18.925 { 00:21:18.925 "method": "bdev_nvme_attach_controller", 00:21:18.925 "params": { 00:21:18.925 "name": "nvme0", 00:21:18.925 "trtype": "tcp", 00:21:18.925 "traddr": "10.0.0.3", 00:21:18.925 "adrfam": "ipv4", 00:21:18.925 "trsvcid": "4420", 00:21:18.925 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:18.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:21:18.925 "prchk_reftag": false, 00:21:18.925 "prchk_guard": false, 00:21:18.925 "hdgst": false, 00:21:18.925 "ddgst": false, 00:21:18.925 "dhchap_key": "key0", 00:21:18.925 "dhchap_ctrlr_key": "key1", 00:21:18.925 "allow_unrecognized_csi": false 00:21:18.925 } 00:21:18.925 } 00:21:18.925 Got JSON-RPC error response 00:21:18.925 GoRPCClient: error on JSON-RPC call 00:21:18.925 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:18.925 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:18.925 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:18.925 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:18.925 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:18.925 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:18.925 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:19.183 nvme0n1 00:21:19.183 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:19.183 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:19.183 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.750 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.750 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.750 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.008 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 00:21:20.008 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.008 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:21:20.008 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.008 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:20.008 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:20.008 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:20.945 nvme0n1 00:21:20.945 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:20.945 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.945 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:21.204 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.204 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:21.204 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.204 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.204 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.204 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:21.204 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:21.204 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.463 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.463 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:21:21.463 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid 81b74ad8-8517-4157-88ee-39db73f4d0b6 -l 0 --dhchap-secret DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: --dhchap-ctrl-secret DHHC-1:03:ZTY2MDI3NTBjMDU4NDVhMDRmNDFmZmRiZmMxNTZiZTg0MDFlMmQ2MGY5MTIzOGU3M2MxY2ZmNzg1MTA5ZTJhNrrY/cA=: 00:21:22.030 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
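The connect in the entry above exercises the kernel initiator with bidirectional authentication: --dhchap-secret carries the host key, --dhchap-ctrl-secret the controller key. A sketch with the secrets elided (the full DHHC-1 strings appear verbatim in the trace above; hostnqn and hostid stand in for the uuid-based values shown there):

  # nvme-cli connect over TCP with mutual DH-HMAC-CHAP; -i 1 limits I/O queues,
  # -l 0 sets the keep-alive-less ctrl loss timeout used by the test.
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:03:...'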
00:21:22.030 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:22.030 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:22.030 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:22.030 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:22.030 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:22.030 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:22.030 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.030 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.598 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:21:22.598 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:22.598 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:22.598 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:22.598 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:22.598 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:22.598 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:22.598 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:22.598 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:22.598 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:23.166 2024/12/14 18:41:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:23.166 request: 00:21:23.166 { 00:21:23.166 "method": "bdev_nvme_attach_controller", 00:21:23.166 "params": { 00:21:23.166 "name": "nvme0", 00:21:23.166 "trtype": "tcp", 00:21:23.166 "traddr": "10.0.0.3", 00:21:23.166 "adrfam": "ipv4", 
00:21:23.166 "trsvcid": "4420", 00:21:23.166 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:23.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6", 00:21:23.166 "prchk_reftag": false, 00:21:23.166 "prchk_guard": false, 00:21:23.166 "hdgst": false, 00:21:23.166 "ddgst": false, 00:21:23.166 "dhchap_key": "key1", 00:21:23.166 "allow_unrecognized_csi": false 00:21:23.166 } 00:21:23.166 } 00:21:23.166 Got JSON-RPC error response 00:21:23.166 GoRPCClient: error on JSON-RPC call 00:21:23.166 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:23.166 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:23.166 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:23.166 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:23.166 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:23.166 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:23.166 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:24.104 nvme0n1 00:21:24.104 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:24.104 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:24.104 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.363 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.363 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.363 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.622 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:21:24.622 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.622 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.622 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.622 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:24.622 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:24.622 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:24.880 nvme0n1 00:21:24.880 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:24.880 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.880 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:25.139 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.139 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.139 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.706 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:25.706 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.706 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.706 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.706 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: '' 2s 00:21:25.706 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:25.706 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:25.706 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: 00:21:25.706 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:25.706 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:25.706 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:25.706 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: ]] 00:21:25.706 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTczYjkyNjE2ZjJlZjU2NmU3NGE0ZTdlMDlkOWE1NDDZ4SsK: 00:21:25.706 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:25.706 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:25.706 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: 2s 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: ]] 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTA3MzExNGYzNGNmODk3YzMwZmJlOWI0NWI2MjZhMjg5NjkzNjNjZWM4ZmMyNjk5JR1UeQ==: 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:27.620 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:29.555 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:29.555 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:21:29.555 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:21:29.555 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:29.555 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:21:29.555 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:29.555 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:21:29.555 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.813 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:29.813 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.813 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.813 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.814 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:29.814 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:29.814 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:30.749 nvme0n1 00:21:30.749 18:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:30.749 18:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.749 18:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.749 18:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.749 18:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:30.749 18:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:31.317 18:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:31.317 18:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:31.317 18:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:31.576 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.576 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:21:31.576 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.576 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.576 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.576 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:31.576 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:31.834 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:31.834 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:31.834 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.093 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.093 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:32.093 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.093 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.093 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.093 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:32.093 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:32.093 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:32.093 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:32.093 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:32.093 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:32.093 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:32.093 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:32.093 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 
--dhchap-ctrlr-key key3 00:21:32.661 2024/12/14 18:41:48 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:21:32.661 request: 00:21:32.661 { 00:21:32.661 "method": "bdev_nvme_set_keys", 00:21:32.661 "params": { 00:21:32.661 "name": "nvme0", 00:21:32.661 "dhchap_key": "key1", 00:21:32.661 "dhchap_ctrlr_key": "key3" 00:21:32.661 } 00:21:32.661 } 00:21:32.661 Got JSON-RPC error response 00:21:32.661 GoRPCClient: error on JSON-RPC call 00:21:32.661 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:32.661 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:32.661 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:32.661 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:32.661 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:32.661 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:32.661 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.919 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:21:32.919 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:34.297 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:34.297 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.297 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:34.297 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:34.297 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:34.297 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.297 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.297 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.297 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:34.297 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:34.297 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:35.233 nvme0n1 00:21:35.233 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:35.233 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.233 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.233 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.233 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:35.233 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:35.233 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:35.233 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:35.233 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:35.233 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:35.233 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:35.233 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:35.233 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:35.800 request: 00:21:35.800 { 00:21:35.800 "method": "bdev_nvme_set_keys", 00:21:35.800 "params": { 00:21:35.800 "name": "nvme0", 00:21:35.800 "dhchap_key": "key2", 00:21:35.800 "dhchap_ctrlr_key": "key0" 00:21:35.800 } 00:21:35.800 } 00:21:35.800 Got JSON-RPC error response 00:21:35.800 GoRPCClient: error on JSON-RPC call 00:21:35.800 2024/12/14 18:41:51 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:21:35.800 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:35.800 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:35.800 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:35.800 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:35.800 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:35.800 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 
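Both NOT-wrapped rotations above follow the same pattern: propose a key pair the subsystem was not configured with, expect bdev_nvme_set_keys to answer with Code=-13 Msg=Permission denied, and then wait for the host side to drop the controller (it was attached with --ctrlr-loss-timeout-sec 1, so the failed re-key is fatal within a second). The wait is the jq-length loop the trace inlines at auth.sh@262-263 and again at @272-273. A minimal sketch of that polling pattern, with a hypothetical helper name (auth.sh itself inlines the loop):

wait_for_ctrlr_detach() {
    # Poll the host RPC socket until bdev_nvme_get_controllers reports an
    # empty list, i.e. the failed-auth controller has been torn down.
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local sock=/var/tmp/host.sock
    local count
    while :; do
        count=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq length)
        (( count == 0 )) && return 0
        sleep 1s
    done
}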
00:21:35.801 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.368 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:36.368 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:37.305 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:37.305 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.305 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:37.564 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:37.564 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:37.564 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:37.564 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 94219 00:21:37.564 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 94219 ']' 00:21:37.564 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 94219 00:21:37.564 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:37.564 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:37.564 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94219 00:21:37.564 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:37.564 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:37.564 killing process with pid 94219 00:21:37.564 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94219' 00:21:37.564 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 94219 00:21:37.564 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 94219 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:38.132 rmmod nvme_tcp 00:21:38.132 rmmod nvme_fabrics 00:21:38.132 rmmod nvme_keyring 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@128 -- # set -e 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 99020 ']' 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 99020 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 99020 ']' 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 99020 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99020 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:38.132 killing process with pid 99020 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99020' 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 99020 00:21:38.132 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 99020 00:21:38.391 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:38.391 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:38.391 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:38.391 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:38.391 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:21:38.391 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:38.391 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:21:38.391 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:38.391 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:38.391 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:38.391 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:38.649 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:38.649 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:38.649 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:38.649 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:38.649 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:38.649 18:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:38.649 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:38.649 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:38.649 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:38.649 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:38.649 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:38.649 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:38.649 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.649 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.649 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.649 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:21:38.649 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.xmg /tmp/spdk.key-sha256.Zu7 /tmp/spdk.key-sha384.T5z /tmp/spdk.key-sha512.X77 /tmp/spdk.key-sha512.oid /tmp/spdk.key-sha384.obU /tmp/spdk.key-sha256.auT '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:21:38.649 00:21:38.649 real 3m9.256s 00:21:38.649 user 7m41.529s 00:21:38.649 sys 0m24.349s 00:21:38.650 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:38.650 ************************************ 00:21:38.650 END TEST nvmf_auth_target 00:21:38.650 ************************************ 00:21:38.650 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:38.909 ************************************ 00:21:38.909 START TEST nvmf_bdevio_no_huge 00:21:38.909 ************************************ 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:38.909 * Looking for test storage... 
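One detail of the nvmftestfini teardown above is worth calling out before the next test gets going: every iptables rule the suite adds is tagged, so cleanup never has to remember individual rules. Inserts go through an ipts wrapper that appends an SPDK_NVMF comment (its expansion is visible further down in this log at nvmf/common.sh@786), and iptr simply filters the tag out of a full ruleset dump. A condensed sketch of the pair, assuming the real common.sh versions carry extra handling:

ipts() {
    # Insert a rule tagged with its own argument string, e.g.
    #   ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

iptr() {
    # Teardown: dump all rules, drop the SPDK_NVMF-tagged ones, reload the rest.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}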
00:21:38.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:38.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.909 --rc genhtml_branch_coverage=1 00:21:38.909 --rc genhtml_function_coverage=1 00:21:38.909 --rc genhtml_legend=1 00:21:38.909 --rc geninfo_all_blocks=1 00:21:38.909 --rc geninfo_unexecuted_blocks=1 00:21:38.909 00:21:38.909 ' 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:38.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.909 --rc genhtml_branch_coverage=1 00:21:38.909 --rc genhtml_function_coverage=1 00:21:38.909 --rc genhtml_legend=1 00:21:38.909 --rc geninfo_all_blocks=1 00:21:38.909 --rc geninfo_unexecuted_blocks=1 00:21:38.909 00:21:38.909 ' 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:38.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.909 --rc genhtml_branch_coverage=1 00:21:38.909 --rc genhtml_function_coverage=1 00:21:38.909 --rc genhtml_legend=1 00:21:38.909 --rc geninfo_all_blocks=1 00:21:38.909 --rc geninfo_unexecuted_blocks=1 00:21:38.909 00:21:38.909 ' 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:38.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.909 --rc genhtml_branch_coverage=1 00:21:38.909 --rc genhtml_function_coverage=1 00:21:38.909 --rc genhtml_legend=1 00:21:38.909 --rc geninfo_all_blocks=1 00:21:38.909 --rc geninfo_unexecuted_blocks=1 00:21:38.909 00:21:38.909 ' 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:38.909 
18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.909 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.910 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.910 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.910 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.910 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.910 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:21:38.910 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:21:38.910 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.910 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.910 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:38.910 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.910 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:38.910 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:39.169 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:39.169 
18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:39.169 Cannot find device "nvmf_init_br" 00:21:39.169 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:39.170 Cannot find device "nvmf_init_br2" 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:39.170 Cannot find device "nvmf_tgt_br" 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:39.170 Cannot find device "nvmf_tgt_br2" 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:39.170 Cannot find device "nvmf_init_br" 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:39.170 Cannot find device "nvmf_init_br2" 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:39.170 Cannot find device "nvmf_tgt_br" 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:39.170 Cannot find device "nvmf_tgt_br2" 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:39.170 Cannot find device "nvmf_br" 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:39.170 Cannot find device "nvmf_init_if" 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:39.170 Cannot find device "nvmf_init_if2" 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:21:39.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:39.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:39.170 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:39.429 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:39.429 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:39.429 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:39.429 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:39.429 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:39.429 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:39.429 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:39.429 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:39.429 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:39.429 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:39.429 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:39.429 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:39.429 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:39.429 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:39.429 18:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:39.429 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:39.429 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:39.429 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:39.429 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:39.429 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:39.430 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:39.430 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:21:39.430 00:21:39.430 --- 10.0.0.3 ping statistics --- 00:21:39.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.430 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:39.430 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:39.430 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:21:39.430 00:21:39.430 --- 10.0.0.4 ping statistics --- 00:21:39.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.430 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:39.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:39.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:21:39.430 00:21:39.430 --- 10.0.0.1 ping statistics --- 00:21:39.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.430 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:39.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:39.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:21:39.430 00:21:39.430 --- 10.0.0.2 ping statistics --- 00:21:39.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.430 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # return 0 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=99897 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 99897 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 99897 ']' 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:39.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:39.430 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:39.689 [2024-12-14 18:41:55.193285] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
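With connectivity to all four addresses verified, nvmfappstart launches the target inside the namespace. Two flags matter for this particular job: --no-huge -s 1024 runs DPDK on plain anonymous memory capped at 1024 MB instead of hugepages, and -m 0x78 (bits 3 through 6) pins the app to cores 3, 4, 5 and 6, which is exactly the set of reactors the startup notices below report. A sketch of the launch-and-wait sequence using the paths from this log; the readiness poll is paraphrased rather than the literal waitforlisten body, and spdk_get_version is used only as a cheap RPC to probe the socket with:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# Wait until the target answers on its RPC socket before issuing real commands.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.1
done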
00:21:39.689 [2024-12-14 18:41:55.193398] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:39.689 [2024-12-14 18:41:55.346078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:39.947 [2024-12-14 18:41:55.472491] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.947 [2024-12-14 18:41:55.472554] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.947 [2024-12-14 18:41:55.472580] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.947 [2024-12-14 18:41:55.472591] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.947 [2024-12-14 18:41:55.472614] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:39.947 [2024-12-14 18:41:55.472799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:21:39.947 [2024-12-14 18:41:55.473203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:21:39.947 [2024-12-14 18:41:55.473342] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:21:39.947 [2024-12-14 18:41:55.473350] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.514 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:40.514 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:21:40.514 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:40.514 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:40.514 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:40.774 [2024-12-14 18:41:56.301225] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:40.774 Malloc0 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:40.774 [2024-12-14 18:41:56.345425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:40.774 { 00:21:40.774 "params": { 00:21:40.774 "name": "Nvme$subsystem", 00:21:40.774 "trtype": "$TEST_TRANSPORT", 00:21:40.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.774 "adrfam": "ipv4", 00:21:40.774 "trsvcid": "$NVMF_PORT", 00:21:40.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.774 "hdgst": ${hdgst:-false}, 00:21:40.774 "ddgst": ${ddgst:-false} 00:21:40.774 }, 00:21:40.774 "method": "bdev_nvme_attach_controller" 00:21:40.774 } 00:21:40.774 EOF 00:21:40.774 )") 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
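The --json /dev/fd/62 argument to bdevio is fed by gen_nvmf_target_json, which expands one heredoc template per requested subsystem (hdgst/ddgst defaulting to false), joins the fragments, and pipes the result through jq for validation; the pretty-printed config bdevio actually receives is the printf output just below. A hypothetical single-subsystem reduction of that generator, filled in with the defaults from this run:

gen_attach_json() {
    # Expand the attach-controller template for subsystem $1 and let jq
    # validate and pretty-print it, mirroring the config+=(...) / jq . steps above.
    local n=${1:-1}
    jq . <<EOF
{
  "params": {
    "name": "Nvme$n",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

Calling gen_attach_json 1 reproduces the object printed below; the real generator additionally joins multiple such objects with IFS=, when more than one subsystem is requested.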
00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:21:40.774 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:21:40.774 "params": { 00:21:40.774 "name": "Nvme1", 00:21:40.774 "trtype": "tcp", 00:21:40.774 "traddr": "10.0.0.3", 00:21:40.774 "adrfam": "ipv4", 00:21:40.774 "trsvcid": "4420", 00:21:40.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:40.774 "hdgst": false, 00:21:40.774 "ddgst": false 00:21:40.774 }, 00:21:40.774 "method": "bdev_nvme_attach_controller" 00:21:40.774 }' 00:21:40.774 [2024-12-14 18:41:56.419350] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:40.774 [2024-12-14 18:41:56.419458] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid99951 ] 00:21:41.033 [2024-12-14 18:41:56.565644] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:41.033 [2024-12-14 18:41:56.724519] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.033 [2024-12-14 18:41:56.724684] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.033 [2024-12-14 18:41:56.724685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.291 I/O targets: 00:21:41.291 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:41.291 00:21:41.291 00:21:41.291 CUnit - A unit testing framework for C - Version 2.1-3 00:21:41.291 http://cunit.sourceforge.net/ 00:21:41.291 00:21:41.291 00:21:41.291 Suite: bdevio tests on: Nvme1n1 00:21:41.291 Test: blockdev write read block ...passed 00:21:41.291 Test: blockdev write zeroes read block ...passed 00:21:41.291 Test: blockdev write zeroes read no split ...passed 00:21:41.550 Test: blockdev write zeroes read split ...passed 00:21:41.550 Test: blockdev write zeroes read split partial ...passed 00:21:41.550 Test: blockdev reset ...[2024-12-14 18:41:57.063980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:41.550 [2024-12-14 18:41:57.064091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf8b30 (9): Bad file descriptor 00:21:41.550 [2024-12-14 18:41:57.083771] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:41.550 passed 00:21:41.550 Test: blockdev write read 8 blocks ...passed 00:21:41.550 Test: blockdev write read size > 128k ...passed 00:21:41.550 Test: blockdev write read invalid size ...passed 00:21:41.550 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:41.550 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:41.550 Test: blockdev write read max offset ...passed 00:21:41.550 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:41.550 Test: blockdev writev readv 8 blocks ...passed 00:21:41.550 Test: blockdev writev readv 30 x 1block ...passed 00:21:41.550 Test: blockdev writev readv block ...passed 00:21:41.550 Test: blockdev writev readv size > 128k ...passed 00:21:41.550 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:41.550 Test: blockdev comparev and writev ...[2024-12-14 18:41:57.258636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:41.550 [2024-12-14 18:41:57.258755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.550 [2024-12-14 18:41:57.258790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:41.550 [2024-12-14 18:41:57.258804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:41.550 [2024-12-14 18:41:57.259301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:41.550 [2024-12-14 18:41:57.259325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:41.550 [2024-12-14 18:41:57.259355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:41.550 [2024-12-14 18:41:57.259364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:41.551 [2024-12-14 18:41:57.259680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:41.551 [2024-12-14 18:41:57.259698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:41.551 [2024-12-14 18:41:57.259714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:41.551 [2024-12-14 18:41:57.259736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:41.551 [2024-12-14 18:41:57.260016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:41.551 [2024-12-14 18:41:57.260033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:41.551 [2024-12-14 18:41:57.260049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:41.551 [2024-12-14 18:41:57.260058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:41.551 passed 00:21:41.810 Test: blockdev nvme passthru rw ...passed 00:21:41.810 Test: blockdev nvme passthru vendor specific ...[2024-12-14 18:41:57.344037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:41.810 [2024-12-14 18:41:57.344086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:41.810 [2024-12-14 18:41:57.344271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:41.810 [2024-12-14 18:41:57.344287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:41.810 [2024-12-14 18:41:57.344425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:41.810 [2024-12-14 18:41:57.344441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:41.810 [2024-12-14 18:41:57.344555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:41.810 [2024-12-14 18:41:57.344570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:41.810 passed 00:21:41.810 Test: blockdev nvme admin passthru ...passed 00:21:41.810 Test: blockdev copy ...passed 00:21:41.810 00:21:41.810 Run Summary: Type Total Ran Passed Failed Inactive 00:21:41.810 suites 1 1 n/a 0 0 00:21:41.810 tests 23 23 23 0 0 00:21:41.810 asserts 152 152 152 0 n/a 00:21:41.810 00:21:41.810 Elapsed time = 0.939 seconds 00:21:42.068 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:42.068 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.068 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:42.068 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.068 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:42.068 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:42.068 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:42.068 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:21:42.327 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:42.327 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:21:42.327 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:42.327 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:42.327 rmmod nvme_tcp 00:21:42.327 rmmod nvme_fabrics 00:21:42.327 rmmod nvme_keyring 00:21:42.327 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:42.327 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:21:42.327 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:21:42.327 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 99897 ']' 00:21:42.327 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 99897 00:21:42.327 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 99897 ']' 00:21:42.327 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 99897 00:21:42.327 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:21:42.327 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.327 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99897 00:21:42.327 killing process with pid 99897 00:21:42.327 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:21:42.327 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:21:42.327 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99897' 00:21:42.327 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 99897 00:21:42.327 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 99897 00:21:42.895 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:42.895 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:42.895 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:42.895 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:21:42.895 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:21:42.895 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:42.895 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:21:42.895 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:42.895 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:42.895 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:42.895 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:42.896 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:42.896 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:42.896 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:42.896 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:42.896 18:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:42.896 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:42.896 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:42.896 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:42.896 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:42.896 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:42.896 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:42.896 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:42.896 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.896 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.896 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.896 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:21:42.896 00:21:42.896 real 0m4.192s 00:21:42.896 user 0m13.631s 00:21:42.896 sys 0m1.608s 00:21:42.896 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:42.896 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:42.896 ************************************ 00:21:42.896 END TEST nvmf_bdevio_no_huge 00:21:42.896 ************************************ 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:43.156 ************************************ 00:21:43.156 START TEST nvmf_tls 00:21:43.156 ************************************ 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:43.156 * Looking for test storage... 
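Before the TLS suite starts probing for storage, the nvmftestfini/nvmf_veth_fini calls interleaved with the shutdown above collapse to roughly this sequence; a condensed sketch, assuming the default interface and namespace names from nvmf/common.sh:

iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep every rule except the SPDK-tagged ones
for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" nomaster                        # detach the bridge-side veth ends
    ip link set "$peer" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                        # roughly what _remove_spdk_ns amounts to here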
00:21:43.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:43.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.156 --rc genhtml_branch_coverage=1 00:21:43.156 --rc genhtml_function_coverage=1 00:21:43.156 --rc genhtml_legend=1 00:21:43.156 --rc geninfo_all_blocks=1 00:21:43.156 --rc geninfo_unexecuted_blocks=1 00:21:43.156 00:21:43.156 ' 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:43.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.156 --rc genhtml_branch_coverage=1 00:21:43.156 --rc genhtml_function_coverage=1 00:21:43.156 --rc genhtml_legend=1 00:21:43.156 --rc geninfo_all_blocks=1 00:21:43.156 --rc geninfo_unexecuted_blocks=1 00:21:43.156 00:21:43.156 ' 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:43.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.156 --rc genhtml_branch_coverage=1 00:21:43.156 --rc genhtml_function_coverage=1 00:21:43.156 --rc genhtml_legend=1 00:21:43.156 --rc geninfo_all_blocks=1 00:21:43.156 --rc geninfo_unexecuted_blocks=1 00:21:43.156 00:21:43.156 ' 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:43.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.156 --rc genhtml_branch_coverage=1 00:21:43.156 --rc genhtml_function_coverage=1 00:21:43.156 --rc genhtml_legend=1 00:21:43.156 --rc geninfo_all_blocks=1 00:21:43.156 --rc geninfo_unexecuted_blocks=1 00:21:43.156 00:21:43.156 ' 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.156 18:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:21:43.156 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:43.157 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:43.157 
18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:43.157 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:43.416 Cannot find device "nvmf_init_br" 00:21:43.416 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:21:43.416 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:43.416 Cannot find device "nvmf_init_br2" 00:21:43.416 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:21:43.416 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:43.416 Cannot find device "nvmf_tgt_br" 00:21:43.416 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:21:43.416 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:43.416 Cannot find device "nvmf_tgt_br2" 00:21:43.416 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:21:43.416 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:43.416 Cannot find device "nvmf_init_br" 00:21:43.416 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:21:43.417 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:43.417 Cannot find device "nvmf_init_br2" 00:21:43.417 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:21:43.417 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:43.417 Cannot find device "nvmf_tgt_br" 00:21:43.417 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:21:43.417 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:43.417 Cannot find device "nvmf_tgt_br2" 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:43.417 Cannot find device "nvmf_br" 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:43.417 Cannot find device "nvmf_init_if" 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:43.417 Cannot find device "nvmf_init_if2" 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:43.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:43.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:43.417 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:43.676 18:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:43.676 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:43.676 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:21:43.676 00:21:43.676 --- 10.0.0.3 ping statistics --- 00:21:43.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.676 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:43.676 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:43.676 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:21:43.676 00:21:43.676 --- 10.0.0.4 ping statistics --- 00:21:43.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.676 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:43.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:43.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:21:43.676 00:21:43.676 --- 10.0.0.1 ping statistics --- 00:21:43.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.676 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:43.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:43.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:21:43.676 00:21:43.676 --- 10.0.0.2 ping statistics --- 00:21:43.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.676 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # return 0 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:43.676 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:43.677 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:43.677 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:43.677 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:43.677 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.677 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=100190 00:21:43.677 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 100190 00:21:43.677 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:43.677 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 100190 ']' 00:21:43.677 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.677 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.677 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.677 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.677 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.677 [2024-12-14 18:41:59.388436] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
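The four pings above verify the topology nvmf_veth_init just built: two initiator veths in the root namespace, two target veths inside nvmf_tgt_ns_spdk, and all four bridge-side peer ends enslaved to a single bridge. A condensed sketch with the names and addresses from the trace (link-up of the inner interfaces and lo omitted for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side, 10.0.0.1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator side, 10.0.0.2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side, 10.0.0.3
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target side, 10.0.0.4
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge && ip link set nvmf_br up
for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" up
    ip link set "$peer" master nvmf_br                        # one L2 segment for all four pairs
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from each initiator
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT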
00:21:43.677 [2024-12-14 18:41:59.388523] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.936 [2024-12-14 18:41:59.535174] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.936 [2024-12-14 18:41:59.625408] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.936 [2024-12-14 18:41:59.625483] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.936 [2024-12-14 18:41:59.625497] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.936 [2024-12-14 18:41:59.625508] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.936 [2024-12-14 18:41:59.625517] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:43.936 [2024-12-14 18:41:59.625551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.872 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.872 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:44.872 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:44.872 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:44.872 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.872 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.872 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:21:44.872 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:45.131 true 00:21:45.131 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:45.131 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:21:45.388 18:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:21:45.388 18:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:21:45.388 18:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:45.955 18:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:45.955 18:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:21:46.213 18:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:21:46.213 18:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:21:46.213 18:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:46.478 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:21:46.478 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:21:46.762 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:21:46.762 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:21:46.762 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:21:46.762 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:47.036 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:21:47.036 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:21:47.036 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:47.293 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:47.293 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:47.552 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:21:47.552 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:21:47.552 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:47.811 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:47.811 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.9yUKkHxCB4 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.tdzgUTXCIY 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.9yUKkHxCB4 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.tdzgUTXCIY 00:21:48.379 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:48.638 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:49.206 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.9yUKkHxCB4 00:21:49.206 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9yUKkHxCB4 00:21:49.206 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:49.206 [2024-12-14 18:42:04.936352] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.468 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:49.728 18:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:21:49.986 [2024-12-14 18:42:05.488408] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:49.986 [2024-12-14 18:42:05.488688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:49.986 18:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:50.245 malloc0 00:21:50.245 18:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:50.504 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9yUKkHxCB4 00:21:50.762 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:51.021 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.9yUKkHxCB4 00:22:03.230 Initializing NVMe Controllers 00:22:03.230 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:22:03.230 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:03.230 Initialization complete. Launching workers. 00:22:03.230 ======================================================== 00:22:03.230 Latency(us) 00:22:03.230 Device Information : IOPS MiB/s Average min max 00:22:03.230 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10480.62 40.94 6107.73 1581.19 8663.00 00:22:03.230 ======================================================== 00:22:03.230 Total : 10480.62 40.94 6107.73 1581.19 8663.00 00:22:03.230 00:22:03.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:03.230 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9yUKkHxCB4 00:22:03.230 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:03.230 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:03.230 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:03.230 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9yUKkHxCB4 00:22:03.230 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:03.230 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100561 00:22:03.230 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:03.230 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100561 /var/tmp/bdevperf.sock 00:22:03.230 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 100561 ']' 00:22:03.230 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.230 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:03.230 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:03.230 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.230 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:03.230 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.230 [2024-12-14 18:42:16.954567] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
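Both keys exercised above come out of format_interchange_psk: the configured secret (the literal 32-character hex string, not its raw bytes) gets a CRC32 appended and the result is base64-wrapped into the NVMe TLS PSK interchange format, with the 01 field indicating SHA-256. A minimal re-creation of the helper, assuming the CRC is appended little-endian as in the harness's format_key:

format_interchange_psk() {
    local secret=$1 hash_id=$2
    python3 - "$secret" "$hash_id" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                    # the literal hex string is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")   # assumed little-endian, per the harness
b64 = base64.b64encode(secret + crc).decode()
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02d}:{b64}:")
EOF
}

format_interchange_psk 00112233445566778899aabbccddeeff 1   # should reproduce the first key above

The formatted key is written to a mktemp file with mode 0600, loaded via keyring_file_add_key as key0, and then referenced both by nvmf_subsystem_add_host --psk on the target and by bdev_nvme_attach_controller --psk on the initiator, which is what the spdk_nvme_perf and bdevperf runs above rely on.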
00:22:03.230 [2024-12-14 18:42:16.954921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100561 ] 00:22:03.230 [2024-12-14 18:42:17.093261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.230 [2024-12-14 18:42:17.183857] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.230 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:03.230 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:03.230 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9yUKkHxCB4 00:22:03.230 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:03.230 [2024-12-14 18:42:18.529823] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.230 TLSTESTn1 00:22:03.230 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:03.230 Running I/O for 10 seconds... 00:22:05.102 3879.00 IOPS, 15.15 MiB/s [2024-12-14T18:42:21.791Z] 3941.00 IOPS, 15.39 MiB/s [2024-12-14T18:42:22.727Z] 3972.67 IOPS, 15.52 MiB/s [2024-12-14T18:42:24.104Z] 3959.25 IOPS, 15.47 MiB/s [2024-12-14T18:42:25.040Z] 3958.20 IOPS, 15.46 MiB/s [2024-12-14T18:42:25.975Z] 3952.17 IOPS, 15.44 MiB/s [2024-12-14T18:42:26.911Z] 3953.29 IOPS, 15.44 MiB/s [2024-12-14T18:42:27.847Z] 3955.75 IOPS, 15.45 MiB/s [2024-12-14T18:42:28.784Z] 3967.89 IOPS, 15.50 MiB/s [2024-12-14T18:42:28.784Z] 3970.50 IOPS, 15.51 MiB/s 00:22:13.031 Latency(us) 00:22:13.031 [2024-12-14T18:42:28.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.031 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:13.031 Verification LBA range: start 0x0 length 0x2000 00:22:13.031 TLSTESTn1 : 10.01 3977.07 15.54 0.00 0.00 32134.07 4319.42 28120.90 00:22:13.031 [2024-12-14T18:42:28.784Z] =================================================================================================================== 00:22:13.031 [2024-12-14T18:42:28.784Z] Total : 3977.07 15.54 0.00 0.00 32134.07 4319.42 28120.90 00:22:13.031 { 00:22:13.031 "results": [ 00:22:13.031 { 00:22:13.031 "job": "TLSTESTn1", 00:22:13.031 "core_mask": "0x4", 00:22:13.031 "workload": "verify", 00:22:13.031 "status": "finished", 00:22:13.031 "verify_range": { 00:22:13.031 "start": 0, 00:22:13.031 "length": 8192 00:22:13.031 }, 00:22:13.031 "queue_depth": 128, 00:22:13.031 "io_size": 4096, 00:22:13.031 "runtime": 10.014909, 00:22:13.031 "iops": 3977.0705854641315, 00:22:13.031 "mibps": 15.535431974469263, 00:22:13.031 "io_failed": 0, 00:22:13.031 "io_timeout": 0, 00:22:13.031 "avg_latency_us": 32134.07161673476, 00:22:13.031 "min_latency_us": 4319.418181818181, 00:22:13.031 "max_latency_us": 28120.901818181817 00:22:13.031 } 00:22:13.031 ], 00:22:13.031 "core_count": 1 00:22:13.031 } 00:22:13.031 18:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:13.031 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 100561 00:22:13.031 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 100561 ']' 00:22:13.031 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 100561 00:22:13.031 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:13.031 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:13.031 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100561 00:22:13.290 killing process with pid 100561 00:22:13.290 Received shutdown signal, test time was about 10.000000 seconds 00:22:13.290 00:22:13.290 Latency(us) 00:22:13.290 [2024-12-14T18:42:29.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.290 [2024-12-14T18:42:29.043Z] =================================================================================================================== 00:22:13.290 [2024-12-14T18:42:29.043Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:13.290 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:13.290 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:13.290 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100561' 00:22:13.290 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 100561 00:22:13.290 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 100561 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tdzgUTXCIY 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tdzgUTXCIY 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tdzgUTXCIY 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tdzgUTXCIY 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:13.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100720 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100720 /var/tmp/bdevperf.sock 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 100720 ']' 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:13.290 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:13.549 [2024-12-14 18:42:29.085812] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:13.549 [2024-12-14 18:42:29.085908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100720 ] 00:22:13.549 [2024-12-14 18:42:29.220453] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.808 [2024-12-14 18:42:29.307520] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.808 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:13.808 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:13.808 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tdzgUTXCIY 00:22:14.067 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:14.327 [2024-12-14 18:42:29.985476] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:14.327 [2024-12-14 18:42:29.997473] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:14.327 [2024-12-14 18:42:29.998353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e4460 (107): Transport endpoint is not connected 00:22:14.327 [2024-12-14 18:42:29.999326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e4460 (9): Bad file descriptor 
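The "Transport endpoint is not connected" and "Bad file descriptor" errors above (and the controller-failure messages that follow) are the expected outcome of this iteration: the target holds the key from /tmp/tmp.9yUKkHxCB4 while the initiator presents /tmp/tmp.tdzgUTXCIY, so the TLS handshake fails and the socket is torn down before controller initialization completes. A hedged sketch of the same check done by hand (socket path, addresses, and NQNs as logged above; success of the attach would be the failure case here):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tdzgUTXCIY
if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0; then
    echo "unexpected: attach succeeded with a mismatched PSK" >&2
fi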
00:22:14.327 [2024-12-14 18:42:30.000323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:14.327 [2024-12-14 18:42:30.000362] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:22:14.327 [2024-12-14 18:42:30.000388] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:14.327 [2024-12-14 18:42:30.000403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:14.327 2024/12/14 18:42:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:14.327 request: 00:22:14.327 { 00:22:14.327 "method": "bdev_nvme_attach_controller", 00:22:14.327 "params": { 00:22:14.327 "name": "TLSTEST", 00:22:14.327 "trtype": "tcp", 00:22:14.327 "traddr": "10.0.0.3", 00:22:14.327 "adrfam": "ipv4", 00:22:14.327 "trsvcid": "4420", 00:22:14.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:14.327 "prchk_reftag": false, 00:22:14.327 "prchk_guard": false, 00:22:14.327 "hdgst": false, 00:22:14.327 "ddgst": false, 00:22:14.327 "psk": "key0", 00:22:14.327 "allow_unrecognized_csi": false 00:22:14.327 } 00:22:14.327 } 00:22:14.327 Got JSON-RPC error response 00:22:14.327 GoRPCClient: error on JSON-RPC call 00:22:14.327 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 100720 00:22:14.327 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 100720 ']' 00:22:14.327 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 100720 00:22:14.327 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:14.327 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:14.327 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100720 00:22:14.327 killing process with pid 100720 00:22:14.327 Received shutdown signal, test time was about 10.000000 seconds 00:22:14.327 00:22:14.327 Latency(us) 00:22:14.327 [2024-12-14T18:42:30.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.327 [2024-12-14T18:42:30.080Z] =================================================================================================================== 00:22:14.327 [2024-12-14T18:42:30.080Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:14.327 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:14.327 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:14.327 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100720' 00:22:14.327 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 100720 00:22:14.327 18:42:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 100720 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9yUKkHxCB4 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9yUKkHxCB4 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9yUKkHxCB4 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9yUKkHxCB4 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100759 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100759 /var/tmp/bdevperf.sock 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 100759 ']' 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:14.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
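Each run_bdevperf iteration in this section drives the same harness; stripped to its essentials (binaries, flags, and socket path as logged above; the waitforlisten polling is elided), it is roughly:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$psk"
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 \
    -s 4420 -f ipv4 -n "$subnqn" -q "$hostnqn" --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 \
    -s /var/tmp/bdevperf.sock perform_tests
kill "$bdevperf_pid"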
00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:14.595 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:14.966 [2024-12-14 18:42:30.363816] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:14.966 [2024-12-14 18:42:30.364101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100759 ] 00:22:14.966 [2024-12-14 18:42:30.506073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.966 [2024-12-14 18:42:30.613451] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.902 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:15.902 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:15.902 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9yUKkHxCB4 00:22:15.903 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:16.470 [2024-12-14 18:42:31.928516] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:16.471 [2024-12-14 18:42:31.937538] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:16.471 [2024-12-14 18:42:31.937578] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:16.471 [2024-12-14 18:42:31.937648] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:16.471 [2024-12-14 18:42:31.938419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3a460 (107): Transport endpoint is not connected 00:22:16.471 [2024-12-14 18:42:31.939409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3a460 (9): Bad file descriptor 00:22:16.471 [2024-12-14 18:42:31.940404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:16.471 [2024-12-14 18:42:31.940447] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:22:16.471 [2024-12-14 18:42:31.940464] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:16.471 [2024-12-14 18:42:31.940482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
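The "Could not find PSK for identity" errors above make the lookup model visible: the target resolves the pre-shared key by an identity string combining the host NQN and subsystem NQN (printed above as NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1). Only host1 was paired with cnode1 via nvmf_subsystem_add_host, so an initiator claiming host2 cannot be resolved even with the correct key bytes. Shown only to illustrate the model (adding it would defeat this negative test), the pairing that would make the identity resolvable is, hypothetically:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0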
00:22:16.471 2024/12/14 18:42:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:16.471 request: 00:22:16.471 { 00:22:16.471 "method": "bdev_nvme_attach_controller", 00:22:16.471 "params": { 00:22:16.471 "name": "TLSTEST", 00:22:16.471 "trtype": "tcp", 00:22:16.471 "traddr": "10.0.0.3", 00:22:16.471 "adrfam": "ipv4", 00:22:16.471 "trsvcid": "4420", 00:22:16.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.471 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:16.471 "prchk_reftag": false, 00:22:16.471 "prchk_guard": false, 00:22:16.471 "hdgst": false, 00:22:16.471 "ddgst": false, 00:22:16.471 "psk": "key0", 00:22:16.471 "allow_unrecognized_csi": false 00:22:16.471 } 00:22:16.471 } 00:22:16.471 Got JSON-RPC error response 00:22:16.471 GoRPCClient: error on JSON-RPC call 00:22:16.471 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 100759 00:22:16.471 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 100759 ']' 00:22:16.471 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 100759 00:22:16.471 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:16.471 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:16.471 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100759 00:22:16.471 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:16.471 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:16.471 killing process with pid 100759 00:22:16.471 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100759' 00:22:16.471 Received shutdown signal, test time was about 10.000000 seconds 00:22:16.471 00:22:16.471 Latency(us) 00:22:16.471 [2024-12-14T18:42:32.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.471 [2024-12-14T18:42:32.224Z] =================================================================================================================== 00:22:16.471 [2024-12-14T18:42:32.224Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:16.471 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 100759 00:22:16.471 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 100759 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:16.731 18:42:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9yUKkHxCB4 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9yUKkHxCB4 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9yUKkHxCB4 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9yUKkHxCB4 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100817 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100817 /var/tmp/bdevperf.sock 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 100817 ']' 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:16.731 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.731 [2024-12-14 18:42:32.373864] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:22:16.732 [2024-12-14 18:42:32.373961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100817 ] 00:22:16.990 [2024-12-14 18:42:32.515999] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.990 [2024-12-14 18:42:32.577729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.990 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:16.990 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:16.990 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9yUKkHxCB4 00:22:17.558 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:17.817 [2024-12-14 18:42:33.336767] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:17.817 [2024-12-14 18:42:33.347732] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:17.817 [2024-12-14 18:42:33.347767] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:17.817 [2024-12-14 18:42:33.347828] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:17.817 [2024-12-14 18:42:33.348753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4460 (107): Transport endpoint is not connected 00:22:17.817 [2024-12-14 18:42:33.349742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4460 (9): Bad file descriptor 00:22:17.817 [2024-12-14 18:42:33.350738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:17.817 [2024-12-14 18:42:33.350765] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:22:17.817 [2024-12-14 18:42:33.350778] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:17.818 [2024-12-14 18:42:33.350821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
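This iteration fails the same identity lookup from the other side: the identity NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 cannot resolve because no cnode2 subsystem exists on the target, so no host/PSK pairing for it can exist either. For illustration only, the target-side configuration such an identity presupposes would look roughly like this (hypothetical serial number; creating it would defeat the test):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 -k
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk key0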
00:22:17.818 2024/12/14 18:42:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:17.818 request: 00:22:17.818 { 00:22:17.818 "method": "bdev_nvme_attach_controller", 00:22:17.818 "params": { 00:22:17.818 "name": "TLSTEST", 00:22:17.818 "trtype": "tcp", 00:22:17.818 "traddr": "10.0.0.3", 00:22:17.818 "adrfam": "ipv4", 00:22:17.818 "trsvcid": "4420", 00:22:17.818 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:17.818 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:17.818 "prchk_reftag": false, 00:22:17.818 "prchk_guard": false, 00:22:17.818 "hdgst": false, 00:22:17.818 "ddgst": false, 00:22:17.818 "psk": "key0", 00:22:17.818 "allow_unrecognized_csi": false 00:22:17.818 } 00:22:17.818 } 00:22:17.818 Got JSON-RPC error response 00:22:17.818 GoRPCClient: error on JSON-RPC call 00:22:17.818 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 100817 00:22:17.818 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 100817 ']' 00:22:17.818 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 100817 00:22:17.818 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:17.818 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:17.818 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100817 00:22:17.818 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:17.818 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:17.818 killing process with pid 100817 00:22:17.818 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100817' 00:22:17.818 Received shutdown signal, test time was about 10.000000 seconds 00:22:17.818 00:22:17.818 Latency(us) 00:22:17.818 [2024-12-14T18:42:33.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.818 [2024-12-14T18:42:33.571Z] =================================================================================================================== 00:22:17.818 [2024-12-14T18:42:33.571Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:17.818 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 100817 00:22:17.818 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 100817 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:18.077 18:42:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100862 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100862 /var/tmp/bdevperf.sock 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 100862 ']' 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:18.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:18.077 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.077 [2024-12-14 18:42:33.807377] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
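This iteration (tls.sh@156) passes an empty string in place of a key path, so the failure happens one layer earlier than in the previous cases: the keyring rejects the path before any TLS traffic ("Non-absolute paths are not allowed" below), and the subsequent attach fails with "Could not load PSK: key0". The failing call, reduced to one line (expected to return non-zero):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "" && echo "unexpected: empty key path accepted" >&2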
00:22:18.077 [2024-12-14 18:42:33.807500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100862 ] 00:22:18.335 [2024-12-14 18:42:33.947689] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.335 [2024-12-14 18:42:34.018368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.269 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:19.269 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:19.269 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:19.530 [2024-12-14 18:42:35.118941] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:19.530 [2024-12-14 18:42:35.118992] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:19.530 2024/12/14 18:42:35 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:22:19.530 request: 00:22:19.530 { 00:22:19.530 "method": "keyring_file_add_key", 00:22:19.530 "params": { 00:22:19.530 "name": "key0", 00:22:19.530 "path": "" 00:22:19.530 } 00:22:19.530 } 00:22:19.530 Got JSON-RPC error response 00:22:19.530 GoRPCClient: error on JSON-RPC call 00:22:19.530 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:19.791 [2024-12-14 18:42:35.403109] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:19.791 [2024-12-14 18:42:35.403201] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:19.791 2024/12/14 18:42:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:22:19.791 request: 00:22:19.791 { 00:22:19.791 "method": "bdev_nvme_attach_controller", 00:22:19.791 "params": { 00:22:19.791 "name": "TLSTEST", 00:22:19.791 "trtype": "tcp", 00:22:19.791 "traddr": "10.0.0.3", 00:22:19.791 "adrfam": "ipv4", 00:22:19.791 "trsvcid": "4420", 00:22:19.791 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.791 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:19.791 "prchk_reftag": false, 00:22:19.791 "prchk_guard": false, 00:22:19.791 "hdgst": false, 00:22:19.791 "ddgst": false, 00:22:19.791 "psk": "key0", 00:22:19.791 "allow_unrecognized_csi": false 00:22:19.791 } 00:22:19.791 } 00:22:19.791 Got JSON-RPC error response 00:22:19.791 GoRPCClient: error on JSON-RPC call 00:22:19.791 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 100862 00:22:19.792 18:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 100862 ']' 00:22:19.792 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 100862 00:22:19.792 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:19.792 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:19.792 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100862 00:22:19.792 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:19.792 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:19.792 killing process with pid 100862 00:22:19.792 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100862' 00:22:19.792 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 100862 00:22:19.792 Received shutdown signal, test time was about 10.000000 seconds 00:22:19.792 00:22:19.792 Latency(us) 00:22:19.792 [2024-12-14T18:42:35.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.792 [2024-12-14T18:42:35.545Z] =================================================================================================================== 00:22:19.792 [2024-12-14T18:42:35.545Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:19.792 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 100862 00:22:20.050 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:20.050 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:20.050 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:20.050 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:20.050 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:20.050 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 100190 00:22:20.050 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 100190 ']' 00:22:20.050 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 100190 00:22:20.050 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:20.051 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:20.051 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100190 00:22:20.051 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:20.051 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:20.051 killing process with pid 100190 00:22:20.051 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100190' 00:22:20.051 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 100190 00:22:20.051 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 100190 00:22:20.309 18:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:20.309 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:20.309 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:22:20.309 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:22:20.309 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:20.309 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:22:20.309 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:22:20.568 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:20.568 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:20.568 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.lNsBQVAT5Y 00:22:20.568 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:20.568 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.lNsBQVAT5Y 00:22:20.568 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:20.568 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:20.568 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:20.568 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.568 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=100930 00:22:20.568 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 100930 00:22:20.568 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:20.568 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 100930 ']' 00:22:20.568 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.568 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:20.568 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.568 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:20.568 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.568 [2024-12-14 18:42:36.175530] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
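The longer key generated above uses the 02 hash tag of the interchange format with the 48-hex-digit key configured above; the encoding is the same CRC-32-plus-base64 layout sketched earlier. Assuming that layout, the logged NVMeTLSkey-1:02:... string can be reproduced with:

python3 - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff0011223344556677"  # long key, as logged above
crc = zlib.crc32(key).to_bytes(4, "little")                # same CRC-32 framing as before
print("NVMeTLSkey-1:02:%s:" % base64.b64encode(key + crc).decode())
EOF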
00:22:20.568 [2024-12-14 18:42:36.175658] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.568 [2024-12-14 18:42:36.317797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.827 [2024-12-14 18:42:36.408479] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.827 [2024-12-14 18:42:36.408544] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:20.827 [2024-12-14 18:42:36.408554] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.827 [2024-12-14 18:42:36.408566] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.827 [2024-12-14 18:42:36.408572] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:20.827 [2024-12-14 18:42:36.408606] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.764 18:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:21.764 18:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:21.764 18:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:21.764 18:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:21.764 18:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.764 18:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.764 18:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.lNsBQVAT5Y 00:22:21.764 18:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lNsBQVAT5Y 00:22:21.764 18:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:21.764 [2024-12-14 18:42:37.489316] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.764 18:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:22.332 18:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:22:22.332 [2024-12-14 18:42:38.065400] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:22.332 [2024-12-14 18:42:38.065666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:22.591 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:22.850 malloc0 00:22:22.850 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:23.108 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.lNsBQVAT5Y 00:22:23.367 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:23.626 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lNsBQVAT5Y 00:22:23.626 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:23.626 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:23.626 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:23.626 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lNsBQVAT5Y 00:22:23.626 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:23.626 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:23.626 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=101046 00:22:23.626 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:23.626 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 101046 /var/tmp/bdevperf.sock 00:22:23.626 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101046 ']' 00:22:23.626 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.626 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:23.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.626 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.626 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:23.626 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.626 [2024-12-14 18:42:39.266460] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:22:23.626 [2024-12-14 18:42:39.266575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101046 ] 00:22:23.886 [2024-12-14 18:42:39.407581] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.886 [2024-12-14 18:42:39.493096] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.823 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:24.823 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:24.823 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lNsBQVAT5Y 00:22:24.823 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:25.391 [2024-12-14 18:42:40.849188] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:25.391 TLSTESTn1 00:22:25.391 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:25.391 Running I/O for 10 seconds... 00:22:27.702 3990.00 IOPS, 15.59 MiB/s [2024-12-14T18:42:44.391Z] 4034.50 IOPS, 15.76 MiB/s [2024-12-14T18:42:45.326Z] 4050.33 IOPS, 15.82 MiB/s [2024-12-14T18:42:46.262Z] 4073.25 IOPS, 15.91 MiB/s [2024-12-14T18:42:47.272Z] 4096.80 IOPS, 16.00 MiB/s [2024-12-14T18:42:48.237Z] 4096.83 IOPS, 16.00 MiB/s [2024-12-14T18:42:49.173Z] 4090.00 IOPS, 15.98 MiB/s [2024-12-14T18:42:50.109Z] 4096.88 IOPS, 16.00 MiB/s [2024-12-14T18:42:51.487Z] 4100.22 IOPS, 16.02 MiB/s [2024-12-14T18:42:51.487Z] 4098.50 IOPS, 16.01 MiB/s 00:22:35.734 Latency(us) 00:22:35.734 [2024-12-14T18:42:51.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.734 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:35.734 Verification LBA range: start 0x0 length 0x2000 00:22:35.734 TLSTESTn1 : 10.02 4104.36 16.03 0.00 0.00 31136.39 5838.66 25141.99 00:22:35.734 [2024-12-14T18:42:51.487Z] =================================================================================================================== 00:22:35.734 [2024-12-14T18:42:51.487Z] Total : 4104.36 16.03 0.00 0.00 31136.39 5838.66 25141.99 00:22:35.734 { 00:22:35.734 "results": [ 00:22:35.734 { 00:22:35.734 "job": "TLSTESTn1", 00:22:35.734 "core_mask": "0x4", 00:22:35.734 "workload": "verify", 00:22:35.734 "status": "finished", 00:22:35.734 "verify_range": { 00:22:35.734 "start": 0, 00:22:35.734 "length": 8192 00:22:35.734 }, 00:22:35.734 "queue_depth": 128, 00:22:35.734 "io_size": 4096, 00:22:35.734 "runtime": 10.016655, 00:22:35.734 "iops": 4104.364181455785, 00:22:35.734 "mibps": 16.03267258381166, 00:22:35.734 "io_failed": 0, 00:22:35.734 "io_timeout": 0, 00:22:35.734 "avg_latency_us": 31136.38608997152, 00:22:35.734 "min_latency_us": 5838.6618181818185, 00:22:35.734 "max_latency_us": 25141.992727272725 00:22:35.734 } 00:22:35.734 ], 00:22:35.734 "core_count": 1 00:22:35.734 } 00:22:35.734 18:42:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 101046 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101046 ']' 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101046 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101046 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:35.734 killing process with pid 101046 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101046' 00:22:35.734 Received shutdown signal, test time was about 10.000000 seconds 00:22:35.734 00:22:35.734 Latency(us) 00:22:35.734 [2024-12-14T18:42:51.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.734 [2024-12-14T18:42:51.487Z] =================================================================================================================== 00:22:35.734 [2024-12-14T18:42:51.487Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101046 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101046 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.lNsBQVAT5Y 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lNsBQVAT5Y 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lNsBQVAT5Y 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lNsBQVAT5Y 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # psk=/tmp/tmp.lNsBQVAT5Y 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=101202 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 101202 /var/tmp/bdevperf.sock 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101202 ']' 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:35.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:35.734 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.734 [2024-12-14 18:42:51.422073] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:35.735 [2024-12-14 18:42:51.422204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101202 ] 00:22:35.994 [2024-12-14 18:42:51.564680] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.994 [2024-12-14 18:42:51.654677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.930 18:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:36.930 18:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:36.930 18:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lNsBQVAT5Y 00:22:37.189 [2024-12-14 18:42:52.725643] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lNsBQVAT5Y': 0100666 00:22:37.189 [2024-12-14 18:42:52.725703] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:37.189 2024/12/14 18:42:52 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.lNsBQVAT5Y], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:22:37.189 request: 00:22:37.189 { 00:22:37.189 "method": "keyring_file_add_key", 00:22:37.189 "params": { 00:22:37.189 "name": "key0", 00:22:37.189 "path": "/tmp/tmp.lNsBQVAT5Y" 00:22:37.189 } 00:22:37.189 } 00:22:37.189 Got JSON-RPC error response 00:22:37.189 GoRPCClient: error on JSON-RPC call 00:22:37.189 18:42:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:37.449 [2024-12-14 18:42:53.005864] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:37.449 [2024-12-14 18:42:53.005935] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:37.449 2024/12/14 18:42:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:22:37.449 request: 00:22:37.449 { 00:22:37.449 "method": "bdev_nvme_attach_controller", 00:22:37.449 "params": { 00:22:37.449 "name": "TLSTEST", 00:22:37.449 "trtype": "tcp", 00:22:37.449 "traddr": "10.0.0.3", 00:22:37.449 "adrfam": "ipv4", 00:22:37.449 "trsvcid": "4420", 00:22:37.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:37.449 "prchk_reftag": false, 00:22:37.449 "prchk_guard": false, 00:22:37.449 "hdgst": false, 00:22:37.449 "ddgst": false, 00:22:37.449 "psk": "key0", 00:22:37.449 "allow_unrecognized_csi": false 00:22:37.449 } 00:22:37.449 } 00:22:37.449 Got JSON-RPC error response 00:22:37.449 GoRPCClient: error on JSON-RPC call 00:22:37.449 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 101202 00:22:37.449 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101202 ']' 00:22:37.449 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101202 00:22:37.449 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:37.449 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:37.449 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101202 00:22:37.449 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:37.449 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:37.449 killing process with pid 101202 00:22:37.449 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101202' 00:22:37.449 Received shutdown signal, test time was about 10.000000 seconds 00:22:37.449 00:22:37.449 Latency(us) 00:22:37.449 [2024-12-14T18:42:53.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.449 [2024-12-14T18:42:53.202Z] =================================================================================================================== 00:22:37.449 [2024-12-14T18:42:53.202Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:37.449 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101202 00:22:37.449 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101202 00:22:37.708 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 
-- # return 1 00:22:37.708 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:37.708 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:37.708 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:37.708 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:37.708 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 100930 00:22:37.708 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 100930 ']' 00:22:37.708 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 100930 00:22:37.708 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:37.708 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:37.708 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100930 00:22:37.708 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:37.708 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:37.708 killing process with pid 100930 00:22:37.708 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100930' 00:22:37.708 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 100930 00:22:37.708 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 100930 00:22:37.967 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:37.967 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:37.967 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:37.967 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.967 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=101266 00:22:37.967 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:37.967 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 101266 00:22:37.967 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101266 ']' 00:22:37.967 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.967 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:37.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.967 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
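[Annotation] The Operation-not-permitted failures just above are the expected outcome of the negative test: after target/tls.sh@171 loosens the PSK file to 0666, keyring_file_add_key refuses it because SPDK's file-based keyring rejects key files that are group- or world-accessible (the error logs the offending mode, 0100666), and the dependent bdev_nvme_attach_controller then fails with "Required key not available". The suite restores 0600 at tls.sh@182 before reusing the key. Reduced to its essentials, with an illustrative path (rpc.py abbreviates the scripts/rpc.py invocation used throughout this trace):

    chmod 0666 /tmp/psk.key
    rpc.py keyring_file_add_key key0 /tmp/psk.key   # rejected: Operation not permitted
    chmod 0600 /tmp/psk.key
    rpc.py keyring_file_add_key key0 /tmp/psk.key   # accepted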
00:22:37.967 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:37.967 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.967 [2024-12-14 18:42:53.677081] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:37.967 [2024-12-14 18:42:53.677184] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.226 [2024-12-14 18:42:53.818644] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.226 [2024-12-14 18:42:53.889563] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.226 [2024-12-14 18:42:53.889869] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.226 [2024-12-14 18:42:53.889943] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.226 [2024-12-14 18:42:53.890042] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.226 [2024-12-14 18:42:53.890103] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.226 [2024-12-14 18:42:53.890215] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.162 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:39.162 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:39.162 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:39.162 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:39.162 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.162 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.162 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.lNsBQVAT5Y 00:22:39.162 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:39.163 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.lNsBQVAT5Y 00:22:39.163 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:22:39.163 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:39.163 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:22:39.163 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:39.163 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.lNsBQVAT5Y 00:22:39.163 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lNsBQVAT5Y 00:22:39.163 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:39.421 [2024-12-14 18:42:54.946225] tcp.c: 738:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:22:39.421 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:39.679 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:22:39.679 [2024-12-14 18:42:55.398273] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:39.679 [2024-12-14 18:42:55.398697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:39.679 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:40.246 malloc0 00:22:40.246 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:40.505 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lNsBQVAT5Y 00:22:40.764 [2024-12-14 18:42:56.296002] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lNsBQVAT5Y': 0100666 00:22:40.764 [2024-12-14 18:42:56.296418] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:40.764 2024/12/14 18:42:56 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.lNsBQVAT5Y], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:22:40.764 request: 00:22:40.764 { 00:22:40.764 "method": "keyring_file_add_key", 00:22:40.764 "params": { 00:22:40.764 "name": "key0", 00:22:40.764 "path": "/tmp/tmp.lNsBQVAT5Y" 00:22:40.764 } 00:22:40.764 } 00:22:40.764 Got JSON-RPC error response 00:22:40.764 GoRPCClient: error on JSON-RPC call 00:22:40.764 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:41.023 [2024-12-14 18:42:56.596111] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:22:41.023 [2024-12-14 18:42:56.596423] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:41.023 2024/12/14 18:42:56 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:22:41.023 request: 00:22:41.023 { 00:22:41.023 "method": "nvmf_subsystem_add_host", 00:22:41.023 "params": { 00:22:41.023 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:41.023 "host": "nqn.2016-06.io.spdk:host1", 00:22:41.023 "psk": "key0" 00:22:41.023 } 00:22:41.023 } 00:22:41.023 Got JSON-RPC error response 00:22:41.023 GoRPCClient: error on JSON-RPC call 00:22:41.023 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:41.023 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:41.023 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:41.023 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:41.023 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 101266 00:22:41.023 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101266 ']' 00:22:41.023 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101266 00:22:41.023 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:41.023 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:41.023 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101266 00:22:41.023 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:41.023 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:41.023 killing process with pid 101266 00:22:41.023 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101266' 00:22:41.023 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101266 00:22:41.023 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101266 00:22:41.282 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.lNsBQVAT5Y 00:22:41.282 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:22:41.282 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:41.282 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:41.282 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.282 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=101391 00:22:41.282 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:41.282 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 101391 00:22:41.282 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101391 ']' 00:22:41.282 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.282 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:41.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.282 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.282 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:41.282 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.282 [2024-12-14 18:42:57.009293] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
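[Annotation] The setup_nvmf_tgt helper traced through this stretch is the whole target-side TLS bring-up; every RPC below appears verbatim in the log. The -k flag on the listener enables TLS (logged as experimental), and nvmf_subsystem_add_host --psk key0 binds the keyring entry to the host NQN:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.lNsBQVAT5Y
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0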
00:22:41.282 [2024-12-14 18:42:57.009386] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.541 [2024-12-14 18:42:57.146705] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.541 [2024-12-14 18:42:57.221127] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.541 [2024-12-14 18:42:57.221192] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.541 [2024-12-14 18:42:57.221202] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.541 [2024-12-14 18:42:57.221209] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.541 [2024-12-14 18:42:57.221215] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:41.541 [2024-12-14 18:42:57.221244] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.799 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:41.799 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:41.799 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:41.799 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:41.799 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.799 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.799 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.lNsBQVAT5Y 00:22:41.799 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lNsBQVAT5Y 00:22:41.799 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:42.058 [2024-12-14 18:42:57.720039] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.058 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:42.317 18:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:22:42.576 [2024-12-14 18:42:58.324093] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:42.576 [2024-12-14 18:42:58.324322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:42.835 18:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:43.094 malloc0 00:22:43.094 18:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:43.352 18:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.lNsBQVAT5Y 00:22:43.611 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:43.870 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:43.870 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=101493 00:22:43.870 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:43.870 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 101493 /var/tmp/bdevperf.sock 00:22:43.870 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101493 ']' 00:22:43.870 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:43.870 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:43.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:43.870 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:43.870 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:43.870 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.870 [2024-12-14 18:42:59.496349] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
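[Annotation] The initiator side that follows mirrors the earlier successful run: the same 0600 PSK file is registered with the bdevperf process over its own RPC socket, then a TLS-enabled controller attach is issued against the listener. Sketch, with arguments as they appear in the trace:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lNsBQVAT5Y
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0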
00:22:43.870 [2024-12-14 18:42:59.496435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101493 ] 00:22:44.129 [2024-12-14 18:42:59.637244] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.129 [2024-12-14 18:42:59.730595] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.129 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:44.129 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:44.129 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lNsBQVAT5Y 00:22:44.697 18:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:44.697 [2024-12-14 18:43:00.406516] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:44.956 TLSTESTn1 00:22:44.956 18:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:45.215 18:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:22:45.215 "subsystems": [ 00:22:45.215 { 00:22:45.215 "subsystem": "keyring", 00:22:45.215 "config": [ 00:22:45.215 { 00:22:45.215 "method": "keyring_file_add_key", 00:22:45.215 "params": { 00:22:45.215 "name": "key0", 00:22:45.215 "path": "/tmp/tmp.lNsBQVAT5Y" 00:22:45.215 } 00:22:45.215 } 00:22:45.215 ] 00:22:45.215 }, 00:22:45.215 { 00:22:45.215 "subsystem": "iobuf", 00:22:45.215 "config": [ 00:22:45.215 { 00:22:45.215 "method": "iobuf_set_options", 00:22:45.215 "params": { 00:22:45.215 "large_bufsize": 135168, 00:22:45.215 "large_pool_count": 1024, 00:22:45.215 "small_bufsize": 8192, 00:22:45.215 "small_pool_count": 8192 00:22:45.215 } 00:22:45.215 } 00:22:45.215 ] 00:22:45.215 }, 00:22:45.215 { 00:22:45.215 "subsystem": "sock", 00:22:45.215 "config": [ 00:22:45.215 { 00:22:45.215 "method": "sock_set_default_impl", 00:22:45.215 "params": { 00:22:45.215 "impl_name": "posix" 00:22:45.215 } 00:22:45.215 }, 00:22:45.215 { 00:22:45.215 "method": "sock_impl_set_options", 00:22:45.215 "params": { 00:22:45.215 "enable_ktls": false, 00:22:45.215 "enable_placement_id": 0, 00:22:45.215 "enable_quickack": false, 00:22:45.215 "enable_recv_pipe": true, 00:22:45.215 "enable_zerocopy_send_client": false, 00:22:45.215 "enable_zerocopy_send_server": true, 00:22:45.215 "impl_name": "ssl", 00:22:45.215 "recv_buf_size": 4096, 00:22:45.215 "send_buf_size": 4096, 00:22:45.215 "tls_version": 0, 00:22:45.215 "zerocopy_threshold": 0 00:22:45.215 } 00:22:45.215 }, 00:22:45.215 { 00:22:45.215 "method": "sock_impl_set_options", 00:22:45.215 "params": { 00:22:45.215 "enable_ktls": false, 00:22:45.215 "enable_placement_id": 0, 00:22:45.215 "enable_quickack": false, 00:22:45.215 "enable_recv_pipe": true, 00:22:45.215 "enable_zerocopy_send_client": false, 00:22:45.215 "enable_zerocopy_send_server": true, 00:22:45.215 "impl_name": "posix", 00:22:45.215 "recv_buf_size": 2097152, 00:22:45.215 "send_buf_size": 
2097152, 00:22:45.215 "tls_version": 0, 00:22:45.215 "zerocopy_threshold": 0 00:22:45.215 } 00:22:45.215 } 00:22:45.215 ] 00:22:45.215 }, 00:22:45.215 { 00:22:45.215 "subsystem": "vmd", 00:22:45.215 "config": [] 00:22:45.215 }, 00:22:45.215 { 00:22:45.216 "subsystem": "accel", 00:22:45.216 "config": [ 00:22:45.216 { 00:22:45.216 "method": "accel_set_options", 00:22:45.216 "params": { 00:22:45.216 "buf_count": 2048, 00:22:45.216 "large_cache_size": 16, 00:22:45.216 "sequence_count": 2048, 00:22:45.216 "small_cache_size": 128, 00:22:45.216 "task_count": 2048 00:22:45.216 } 00:22:45.216 } 00:22:45.216 ] 00:22:45.216 }, 00:22:45.216 { 00:22:45.216 "subsystem": "bdev", 00:22:45.216 "config": [ 00:22:45.216 { 00:22:45.216 "method": "bdev_set_options", 00:22:45.216 "params": { 00:22:45.216 "bdev_auto_examine": true, 00:22:45.216 "bdev_io_cache_size": 256, 00:22:45.216 "bdev_io_pool_size": 65535, 00:22:45.216 "iobuf_large_cache_size": 16, 00:22:45.216 "iobuf_small_cache_size": 128 00:22:45.216 } 00:22:45.216 }, 00:22:45.216 { 00:22:45.216 "method": "bdev_raid_set_options", 00:22:45.216 "params": { 00:22:45.216 "process_max_bandwidth_mb_sec": 0, 00:22:45.216 "process_window_size_kb": 1024 00:22:45.216 } 00:22:45.216 }, 00:22:45.216 { 00:22:45.216 "method": "bdev_iscsi_set_options", 00:22:45.216 "params": { 00:22:45.216 "timeout_sec": 30 00:22:45.216 } 00:22:45.216 }, 00:22:45.216 { 00:22:45.216 "method": "bdev_nvme_set_options", 00:22:45.216 "params": { 00:22:45.216 "action_on_timeout": "none", 00:22:45.216 "allow_accel_sequence": false, 00:22:45.216 "arbitration_burst": 0, 00:22:45.216 "bdev_retry_count": 3, 00:22:45.216 "ctrlr_loss_timeout_sec": 0, 00:22:45.216 "delay_cmd_submit": true, 00:22:45.216 "dhchap_dhgroups": [ 00:22:45.216 "null", 00:22:45.216 "ffdhe2048", 00:22:45.216 "ffdhe3072", 00:22:45.216 "ffdhe4096", 00:22:45.216 "ffdhe6144", 00:22:45.216 "ffdhe8192" 00:22:45.216 ], 00:22:45.216 "dhchap_digests": [ 00:22:45.216 "sha256", 00:22:45.216 "sha384", 00:22:45.216 "sha512" 00:22:45.216 ], 00:22:45.216 "disable_auto_failback": false, 00:22:45.216 "fast_io_fail_timeout_sec": 0, 00:22:45.216 "generate_uuids": false, 00:22:45.216 "high_priority_weight": 0, 00:22:45.216 "io_path_stat": false, 00:22:45.216 "io_queue_requests": 0, 00:22:45.216 "keep_alive_timeout_ms": 10000, 00:22:45.216 "low_priority_weight": 0, 00:22:45.216 "medium_priority_weight": 0, 00:22:45.216 "nvme_adminq_poll_period_us": 10000, 00:22:45.216 "nvme_error_stat": false, 00:22:45.216 "nvme_ioq_poll_period_us": 0, 00:22:45.216 "rdma_cm_event_timeout_ms": 0, 00:22:45.216 "rdma_max_cq_size": 0, 00:22:45.216 "rdma_srq_size": 0, 00:22:45.216 "reconnect_delay_sec": 0, 00:22:45.216 "timeout_admin_us": 0, 00:22:45.216 "timeout_us": 0, 00:22:45.216 "transport_ack_timeout": 0, 00:22:45.216 "transport_retry_count": 4, 00:22:45.216 "transport_tos": 0 00:22:45.216 } 00:22:45.216 }, 00:22:45.216 { 00:22:45.216 "method": "bdev_nvme_set_hotplug", 00:22:45.216 "params": { 00:22:45.216 "enable": false, 00:22:45.216 "period_us": 100000 00:22:45.216 } 00:22:45.216 }, 00:22:45.216 { 00:22:45.216 "method": "bdev_malloc_create", 00:22:45.216 "params": { 00:22:45.216 "block_size": 4096, 00:22:45.216 "dif_is_head_of_md": false, 00:22:45.216 "dif_pi_format": 0, 00:22:45.216 "dif_type": 0, 00:22:45.216 "md_size": 0, 00:22:45.216 "name": "malloc0", 00:22:45.216 "num_blocks": 8192, 00:22:45.216 "optimal_io_boundary": 0, 00:22:45.216 "physical_block_size": 4096, 00:22:45.216 "uuid": "e5a61429-3114-4be2-8c2a-d8c4c959a5b2" 00:22:45.216 } 00:22:45.216 }, 
00:22:45.216 { 00:22:45.216 "method": "bdev_wait_for_examine" 00:22:45.216 } 00:22:45.216 ] 00:22:45.216 }, 00:22:45.216 { 00:22:45.216 "subsystem": "nbd", 00:22:45.216 "config": [] 00:22:45.216 }, 00:22:45.216 { 00:22:45.216 "subsystem": "scheduler", 00:22:45.216 "config": [ 00:22:45.216 { 00:22:45.216 "method": "framework_set_scheduler", 00:22:45.216 "params": { 00:22:45.216 "name": "static" 00:22:45.216 } 00:22:45.216 } 00:22:45.216 ] 00:22:45.216 }, 00:22:45.216 { 00:22:45.216 "subsystem": "nvmf", 00:22:45.216 "config": [ 00:22:45.216 { 00:22:45.216 "method": "nvmf_set_config", 00:22:45.216 "params": { 00:22:45.216 "admin_cmd_passthru": { 00:22:45.216 "identify_ctrlr": false 00:22:45.216 }, 00:22:45.216 "dhchap_dhgroups": [ 00:22:45.216 "null", 00:22:45.216 "ffdhe2048", 00:22:45.216 "ffdhe3072", 00:22:45.216 "ffdhe4096", 00:22:45.216 "ffdhe6144", 00:22:45.216 "ffdhe8192" 00:22:45.216 ], 00:22:45.216 "dhchap_digests": [ 00:22:45.216 "sha256", 00:22:45.216 "sha384", 00:22:45.216 "sha512" 00:22:45.216 ], 00:22:45.216 "discovery_filter": "match_any" 00:22:45.216 } 00:22:45.216 }, 00:22:45.216 { 00:22:45.216 "method": "nvmf_set_max_subsystems", 00:22:45.216 "params": { 00:22:45.216 "max_subsystems": 1024 00:22:45.216 } 00:22:45.216 }, 00:22:45.216 { 00:22:45.216 "method": "nvmf_set_crdt", 00:22:45.216 "params": { 00:22:45.216 "crdt1": 0, 00:22:45.216 "crdt2": 0, 00:22:45.216 "crdt3": 0 00:22:45.216 } 00:22:45.216 }, 00:22:45.216 { 00:22:45.216 "method": "nvmf_create_transport", 00:22:45.216 "params": { 00:22:45.216 "abort_timeout_sec": 1, 00:22:45.216 "ack_timeout": 0, 00:22:45.216 "buf_cache_size": 4294967295, 00:22:45.216 "c2h_success": false, 00:22:45.216 "data_wr_pool_size": 0, 00:22:45.216 "dif_insert_or_strip": false, 00:22:45.216 "in_capsule_data_size": 4096, 00:22:45.216 "io_unit_size": 131072, 00:22:45.216 "max_aq_depth": 128, 00:22:45.216 "max_io_qpairs_per_ctrlr": 127, 00:22:45.216 "max_io_size": 131072, 00:22:45.216 "max_queue_depth": 128, 00:22:45.216 "num_shared_buffers": 511, 00:22:45.216 "sock_priority": 0, 00:22:45.216 "trtype": "TCP", 00:22:45.216 "zcopy": false 00:22:45.216 } 00:22:45.216 }, 00:22:45.216 { 00:22:45.216 "method": "nvmf_create_subsystem", 00:22:45.216 "params": { 00:22:45.216 "allow_any_host": false, 00:22:45.216 "ana_reporting": false, 00:22:45.216 "max_cntlid": 65519, 00:22:45.216 "max_namespaces": 10, 00:22:45.216 "min_cntlid": 1, 00:22:45.216 "model_number": "SPDK bdev Controller", 00:22:45.216 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.216 "serial_number": "SPDK00000000000001" 00:22:45.216 } 00:22:45.216 }, 00:22:45.216 { 00:22:45.216 "method": "nvmf_subsystem_add_host", 00:22:45.216 "params": { 00:22:45.216 "host": "nqn.2016-06.io.spdk:host1", 00:22:45.216 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.216 "psk": "key0" 00:22:45.216 } 00:22:45.216 }, 00:22:45.216 { 00:22:45.216 "method": "nvmf_subsystem_add_ns", 00:22:45.216 "params": { 00:22:45.216 "namespace": { 00:22:45.216 "bdev_name": "malloc0", 00:22:45.216 "nguid": "E5A6142931144BE28C2AD8C4C959A5B2", 00:22:45.216 "no_auto_visible": false, 00:22:45.216 "nsid": 1, 00:22:45.216 "uuid": "e5a61429-3114-4be2-8c2a-d8c4c959a5b2" 00:22:45.216 }, 00:22:45.216 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:45.216 } 00:22:45.216 }, 00:22:45.216 { 00:22:45.216 "method": "nvmf_subsystem_add_listener", 00:22:45.216 "params": { 00:22:45.216 "listen_address": { 00:22:45.216 "adrfam": "IPv4", 00:22:45.216 "traddr": "10.0.0.3", 00:22:45.216 "trsvcid": "4420", 00:22:45.216 "trtype": "TCP" 00:22:45.216 }, 00:22:45.216 
"nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.216 "secure_channel": true 00:22:45.216 } 00:22:45.216 } 00:22:45.216 ] 00:22:45.216 } 00:22:45.216 ] 00:22:45.216 }' 00:22:45.216 18:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:45.475 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:22:45.475 "subsystems": [ 00:22:45.475 { 00:22:45.475 "subsystem": "keyring", 00:22:45.475 "config": [ 00:22:45.475 { 00:22:45.475 "method": "keyring_file_add_key", 00:22:45.475 "params": { 00:22:45.476 "name": "key0", 00:22:45.476 "path": "/tmp/tmp.lNsBQVAT5Y" 00:22:45.476 } 00:22:45.476 } 00:22:45.476 ] 00:22:45.476 }, 00:22:45.476 { 00:22:45.476 "subsystem": "iobuf", 00:22:45.476 "config": [ 00:22:45.476 { 00:22:45.476 "method": "iobuf_set_options", 00:22:45.476 "params": { 00:22:45.476 "large_bufsize": 135168, 00:22:45.476 "large_pool_count": 1024, 00:22:45.476 "small_bufsize": 8192, 00:22:45.476 "small_pool_count": 8192 00:22:45.476 } 00:22:45.476 } 00:22:45.476 ] 00:22:45.476 }, 00:22:45.476 { 00:22:45.476 "subsystem": "sock", 00:22:45.476 "config": [ 00:22:45.476 { 00:22:45.476 "method": "sock_set_default_impl", 00:22:45.476 "params": { 00:22:45.476 "impl_name": "posix" 00:22:45.476 } 00:22:45.476 }, 00:22:45.476 { 00:22:45.476 "method": "sock_impl_set_options", 00:22:45.476 "params": { 00:22:45.476 "enable_ktls": false, 00:22:45.476 "enable_placement_id": 0, 00:22:45.476 "enable_quickack": false, 00:22:45.476 "enable_recv_pipe": true, 00:22:45.476 "enable_zerocopy_send_client": false, 00:22:45.476 "enable_zerocopy_send_server": true, 00:22:45.476 "impl_name": "ssl", 00:22:45.476 "recv_buf_size": 4096, 00:22:45.476 "send_buf_size": 4096, 00:22:45.476 "tls_version": 0, 00:22:45.476 "zerocopy_threshold": 0 00:22:45.476 } 00:22:45.476 }, 00:22:45.476 { 00:22:45.476 "method": "sock_impl_set_options", 00:22:45.476 "params": { 00:22:45.476 "enable_ktls": false, 00:22:45.476 "enable_placement_id": 0, 00:22:45.476 "enable_quickack": false, 00:22:45.476 "enable_recv_pipe": true, 00:22:45.476 "enable_zerocopy_send_client": false, 00:22:45.476 "enable_zerocopy_send_server": true, 00:22:45.476 "impl_name": "posix", 00:22:45.476 "recv_buf_size": 2097152, 00:22:45.476 "send_buf_size": 2097152, 00:22:45.476 "tls_version": 0, 00:22:45.476 "zerocopy_threshold": 0 00:22:45.476 } 00:22:45.476 } 00:22:45.476 ] 00:22:45.476 }, 00:22:45.476 { 00:22:45.476 "subsystem": "vmd", 00:22:45.476 "config": [] 00:22:45.476 }, 00:22:45.476 { 00:22:45.476 "subsystem": "accel", 00:22:45.476 "config": [ 00:22:45.476 { 00:22:45.476 "method": "accel_set_options", 00:22:45.476 "params": { 00:22:45.476 "buf_count": 2048, 00:22:45.476 "large_cache_size": 16, 00:22:45.476 "sequence_count": 2048, 00:22:45.476 "small_cache_size": 128, 00:22:45.476 "task_count": 2048 00:22:45.476 } 00:22:45.476 } 00:22:45.476 ] 00:22:45.476 }, 00:22:45.476 { 00:22:45.476 "subsystem": "bdev", 00:22:45.476 "config": [ 00:22:45.476 { 00:22:45.476 "method": "bdev_set_options", 00:22:45.476 "params": { 00:22:45.476 "bdev_auto_examine": true, 00:22:45.476 "bdev_io_cache_size": 256, 00:22:45.476 "bdev_io_pool_size": 65535, 00:22:45.476 "iobuf_large_cache_size": 16, 00:22:45.476 "iobuf_small_cache_size": 128 00:22:45.476 } 00:22:45.476 }, 00:22:45.476 { 00:22:45.476 "method": "bdev_raid_set_options", 00:22:45.476 "params": { 00:22:45.476 "process_max_bandwidth_mb_sec": 0, 00:22:45.476 "process_window_size_kb": 1024 
00:22:45.476 } 00:22:45.476 }, 00:22:45.476 { 00:22:45.476 "method": "bdev_iscsi_set_options", 00:22:45.476 "params": { 00:22:45.476 "timeout_sec": 30 00:22:45.476 } 00:22:45.476 }, 00:22:45.476 { 00:22:45.476 "method": "bdev_nvme_set_options", 00:22:45.476 "params": { 00:22:45.476 "action_on_timeout": "none", 00:22:45.476 "allow_accel_sequence": false, 00:22:45.476 "arbitration_burst": 0, 00:22:45.476 "bdev_retry_count": 3, 00:22:45.476 "ctrlr_loss_timeout_sec": 0, 00:22:45.476 "delay_cmd_submit": true, 00:22:45.476 "dhchap_dhgroups": [ 00:22:45.476 "null", 00:22:45.476 "ffdhe2048", 00:22:45.476 "ffdhe3072", 00:22:45.476 "ffdhe4096", 00:22:45.476 "ffdhe6144", 00:22:45.476 "ffdhe8192" 00:22:45.476 ], 00:22:45.476 "dhchap_digests": [ 00:22:45.476 "sha256", 00:22:45.476 "sha384", 00:22:45.476 "sha512" 00:22:45.476 ], 00:22:45.476 "disable_auto_failback": false, 00:22:45.476 "fast_io_fail_timeout_sec": 0, 00:22:45.476 "generate_uuids": false, 00:22:45.476 "high_priority_weight": 0, 00:22:45.476 "io_path_stat": false, 00:22:45.476 "io_queue_requests": 512, 00:22:45.476 "keep_alive_timeout_ms": 10000, 00:22:45.476 "low_priority_weight": 0, 00:22:45.476 "medium_priority_weight": 0, 00:22:45.476 "nvme_adminq_poll_period_us": 10000, 00:22:45.476 "nvme_error_stat": false, 00:22:45.476 "nvme_ioq_poll_period_us": 0, 00:22:45.476 "rdma_cm_event_timeout_ms": 0, 00:22:45.476 "rdma_max_cq_size": 0, 00:22:45.476 "rdma_srq_size": 0, 00:22:45.476 "reconnect_delay_sec": 0, 00:22:45.476 "timeout_admin_us": 0, 00:22:45.476 "timeout_us": 0, 00:22:45.476 "transport_ack_timeout": 0, 00:22:45.476 "transport_retry_count": 4, 00:22:45.476 "transport_tos": 0 00:22:45.476 } 00:22:45.476 }, 00:22:45.476 { 00:22:45.476 "method": "bdev_nvme_attach_controller", 00:22:45.476 "params": { 00:22:45.476 "adrfam": "IPv4", 00:22:45.476 "ctrlr_loss_timeout_sec": 0, 00:22:45.476 "ddgst": false, 00:22:45.476 "fast_io_fail_timeout_sec": 0, 00:22:45.476 "hdgst": false, 00:22:45.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.476 "name": "TLSTEST", 00:22:45.476 "prchk_guard": false, 00:22:45.476 "prchk_reftag": false, 00:22:45.476 "psk": "key0", 00:22:45.476 "reconnect_delay_sec": 0, 00:22:45.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.476 "traddr": "10.0.0.3", 00:22:45.476 "trsvcid": "4420", 00:22:45.476 "trtype": "TCP" 00:22:45.476 } 00:22:45.476 }, 00:22:45.476 { 00:22:45.476 "method": "bdev_nvme_set_hotplug", 00:22:45.476 "params": { 00:22:45.476 "enable": false, 00:22:45.476 "period_us": 100000 00:22:45.476 } 00:22:45.476 }, 00:22:45.476 { 00:22:45.476 "method": "bdev_wait_for_examine" 00:22:45.476 } 00:22:45.476 ] 00:22:45.476 }, 00:22:45.476 { 00:22:45.476 "subsystem": "nbd", 00:22:45.476 "config": [] 00:22:45.476 } 00:22:45.476 ] 00:22:45.476 }' 00:22:45.476 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 101493 00:22:45.476 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101493 ']' 00:22:45.476 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101493 00:22:45.476 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:45.476 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.476 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101493 00:22:45.735 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
process_name=reactor_2 00:22:45.735 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:45.735 killing process with pid 101493 00:22:45.735 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101493' 00:22:45.735 Received shutdown signal, test time was about 10.000000 seconds 00:22:45.735 00:22:45.735 Latency(us) 00:22:45.735 [2024-12-14T18:43:01.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.735 [2024-12-14T18:43:01.488Z] =================================================================================================================== 00:22:45.735 [2024-12-14T18:43:01.488Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:45.735 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101493 00:22:45.735 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101493 00:22:45.735 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 101391 00:22:45.735 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101391 ']' 00:22:45.735 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101391 00:22:45.735 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:45.735 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.735 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101391 00:22:45.993 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:45.993 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:45.993 killing process with pid 101391 00:22:45.993 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101391' 00:22:45.993 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101391 00:22:45.993 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101391 00:22:46.253 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:46.253 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:46.253 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:46.253 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:22:46.253 "subsystems": [ 00:22:46.253 { 00:22:46.253 "subsystem": "keyring", 00:22:46.253 "config": [ 00:22:46.253 { 00:22:46.253 "method": "keyring_file_add_key", 00:22:46.253 "params": { 00:22:46.253 "name": "key0", 00:22:46.253 "path": "/tmp/tmp.lNsBQVAT5Y" 00:22:46.253 } 00:22:46.253 } 00:22:46.253 ] 00:22:46.253 }, 00:22:46.253 { 00:22:46.253 "subsystem": "iobuf", 00:22:46.253 "config": [ 00:22:46.253 { 00:22:46.253 "method": "iobuf_set_options", 00:22:46.253 "params": { 00:22:46.253 "large_bufsize": 135168, 00:22:46.253 "large_pool_count": 1024, 00:22:46.253 "small_bufsize": 8192, 00:22:46.253 "small_pool_count": 8192 00:22:46.253 } 00:22:46.253 } 00:22:46.253 ] 00:22:46.253 }, 00:22:46.253 { 00:22:46.253 "subsystem": "sock", 00:22:46.253 "config": [ 00:22:46.253 
{ 00:22:46.253 "method": "sock_set_default_impl", 00:22:46.253 "params": { 00:22:46.253 "impl_name": "posix" 00:22:46.253 } 00:22:46.253 }, 00:22:46.253 { 00:22:46.253 "method": "sock_impl_set_options", 00:22:46.253 "params": { 00:22:46.253 "enable_ktls": false, 00:22:46.253 "enable_placement_id": 0, 00:22:46.253 "enable_quickack": false, 00:22:46.253 "enable_recv_pipe": true, 00:22:46.253 "enable_zerocopy_send_client": false, 00:22:46.253 "enable_zerocopy_send_server": true, 00:22:46.253 "impl_name": "ssl", 00:22:46.253 "recv_buf_size": 4096, 00:22:46.253 "send_buf_size": 4096, 00:22:46.253 "tls_version": 0, 00:22:46.253 "zerocopy_threshold": 0 00:22:46.253 } 00:22:46.253 }, 00:22:46.253 { 00:22:46.253 "method": "sock_impl_set_options", 00:22:46.253 "params": { 00:22:46.253 "enable_ktls": false, 00:22:46.253 "enable_placement_id": 0, 00:22:46.253 "enable_quickack": false, 00:22:46.253 "enable_recv_pipe": true, 00:22:46.253 "enable_zerocopy_send_client": false, 00:22:46.253 "enable_zerocopy_send_server": true, 00:22:46.253 "impl_name": "posix", 00:22:46.253 "recv_buf_size": 2097152, 00:22:46.253 "send_buf_size": 2097152, 00:22:46.253 "tls_version": 0, 00:22:46.253 "zerocopy_threshold": 0 00:22:46.253 } 00:22:46.253 } 00:22:46.253 ] 00:22:46.253 }, 00:22:46.253 { 00:22:46.253 "subsystem": "vmd", 00:22:46.253 "config": [] 00:22:46.253 }, 00:22:46.253 { 00:22:46.253 "subsystem": "accel", 00:22:46.253 "config": [ 00:22:46.253 { 00:22:46.253 "method": "accel_set_options", 00:22:46.253 "params": { 00:22:46.253 "buf_count": 2048, 00:22:46.253 "large_cache_size": 16, 00:22:46.253 "sequence_count": 2048, 00:22:46.253 "small_cache_size": 128, 00:22:46.253 "task_count": 2048 00:22:46.253 } 00:22:46.253 } 00:22:46.253 ] 00:22:46.253 }, 00:22:46.253 { 00:22:46.253 "subsystem": "bdev", 00:22:46.253 "config": [ 00:22:46.253 { 00:22:46.253 "method": "bdev_set_options", 00:22:46.253 "params": { 00:22:46.253 "bdev_auto_examine": true, 00:22:46.253 "bdev_io_cache_size": 256, 00:22:46.253 "bdev_io_pool_size": 65535, 00:22:46.253 "iobuf_large_cache_size": 16, 00:22:46.253 "iobuf_small_cache_size": 128 00:22:46.253 } 00:22:46.253 }, 00:22:46.253 { 00:22:46.253 "method": "bdev_raid_set_options", 00:22:46.253 "params": { 00:22:46.253 "process_max_bandwidth_mb_sec": 0, 00:22:46.253 "process_window_size_kb": 1024 00:22:46.253 } 00:22:46.253 }, 00:22:46.253 { 00:22:46.253 "method": "bdev_iscsi_set_options", 00:22:46.253 "params": { 00:22:46.253 "timeout_sec": 30 00:22:46.253 } 00:22:46.253 }, 00:22:46.253 { 00:22:46.253 "method": "bdev_nvme_set_options", 00:22:46.253 "params": { 00:22:46.253 "action_on_timeout": "none", 00:22:46.253 "allow_accel_sequence": false, 00:22:46.253 "arbitration_burst": 0, 00:22:46.253 "bdev_retry_count": 3, 00:22:46.253 "ctrlr_loss_timeout_sec": 0, 00:22:46.253 "delay_cmd_submit": true, 00:22:46.253 "dhchap_dhgroups": [ 00:22:46.253 "null", 00:22:46.253 "ffdhe2048", 00:22:46.253 "ffdhe3072", 00:22:46.253 "ffdhe4096", 00:22:46.253 "ffdhe6144", 00:22:46.253 "ffdhe8192" 00:22:46.253 ], 00:22:46.253 "dhchap_digests": [ 00:22:46.253 "sha256", 00:22:46.253 "sha384", 00:22:46.253 "sha512" 00:22:46.253 ], 00:22:46.253 "disable_auto_failback": false, 00:22:46.253 "fast_io_fail_timeout_sec": 0, 00:22:46.253 "generate_uuids": false, 00:22:46.253 "high_priority_weight": 0, 00:22:46.253 "io_path_stat": false, 00:22:46.253 "io_queue_requests": 0, 00:22:46.253 "keep_alive_timeout_ms": 10000, 00:22:46.253 "low_priority_weight": 0, 00:22:46.253 "medium_priority_weight": 0, 00:22:46.253 
"nvme_adminq_poll_period_us": 10000, 00:22:46.253 "nvme_error_stat": false, 00:22:46.253 "nvme_ioq_poll_period_us": 0, 00:22:46.253 "rdma_cm_event_timeout_ms": 0, 00:22:46.253 "rdma_max_cq_size": 0, 00:22:46.253 "rdma_srq_size": 0, 00:22:46.253 "reconnect_delay_sec": 0, 00:22:46.253 "timeout_admin_us": 0, 00:22:46.253 "timeout_us": 0, 00:22:46.253 "transport_ack_timeout": 0, 00:22:46.253 "transport_retry_count": 4, 00:22:46.253 "transport_tos": 0 00:22:46.253 } 00:22:46.253 }, 00:22:46.253 { 00:22:46.253 "method": "bdev_nvme_set_hotplug", 00:22:46.253 "params": { 00:22:46.253 "enable": false, 00:22:46.253 "period_us": 100000 00:22:46.253 } 00:22:46.253 }, 00:22:46.253 { 00:22:46.253 "method": "bdev_malloc_create", 00:22:46.253 "params": { 00:22:46.253 "block_size": 4096, 00:22:46.253 "dif_is_head_of_md": false, 00:22:46.253 "dif_pi_format": 0, 00:22:46.253 "dif_type": 0, 00:22:46.253 "md_size": 0, 00:22:46.253 "name": "malloc0", 00:22:46.253 "num_blocks": 8192, 00:22:46.253 "optimal_io_boundary": 0, 00:22:46.253 "physical_block_size": 4096, 00:22:46.253 "uuid": "e5a61429-3114-4be2-8c2a-d8c4c959a5b2" 00:22:46.253 } 00:22:46.253 }, 00:22:46.253 { 00:22:46.253 "method": "bdev_wait_for_examine" 00:22:46.253 } 00:22:46.253 ] 00:22:46.253 }, 00:22:46.253 { 00:22:46.253 "subsystem": "nbd", 00:22:46.253 "config": [] 00:22:46.253 }, 00:22:46.253 { 00:22:46.253 "subsystem": "scheduler", 00:22:46.253 "config": [ 00:22:46.253 { 00:22:46.253 "method": "framework_set_scheduler", 00:22:46.253 "params": { 00:22:46.253 "name": "static" 00:22:46.253 } 00:22:46.253 } 00:22:46.253 ] 00:22:46.254 }, 00:22:46.254 { 00:22:46.254 "subsystem": "nvmf", 00:22:46.254 "config": [ 00:22:46.254 { 00:22:46.254 "method": "nvmf_set_config", 00:22:46.254 "params": { 00:22:46.254 "admin_cmd_passthru": { 00:22:46.254 "identify_ctrlr": false 00:22:46.254 }, 00:22:46.254 "dhchap_dhgroups": [ 00:22:46.254 "null", 00:22:46.254 "ffdhe2048", 00:22:46.254 "ffdhe3072", 00:22:46.254 "ffdhe4096", 00:22:46.254 "ffdhe6144", 00:22:46.254 "ffdhe8192" 00:22:46.254 ], 00:22:46.254 "dhchap_digests": [ 00:22:46.254 "sha256", 00:22:46.254 "sha384", 00:22:46.254 "sha512" 00:22:46.254 ], 00:22:46.254 "discovery_filter": "match_any" 00:22:46.254 } 00:22:46.254 }, 00:22:46.254 { 00:22:46.254 "method": "nvmf_set_max_subsystems", 00:22:46.254 "params": { 00:22:46.254 "max_subsystems": 1024 00:22:46.254 } 00:22:46.254 }, 00:22:46.254 { 00:22:46.254 "method": "nvmf_set_crdt", 00:22:46.254 "params": { 00:22:46.254 "crdt1": 0, 00:22:46.254 "crdt2": 0, 00:22:46.254 "crdt3": 0 00:22:46.254 } 00:22:46.254 }, 00:22:46.254 { 00:22:46.254 "method": "nvmf_create_transport", 00:22:46.254 "params": { 00:22:46.254 "abort_timeout_sec": 1, 00:22:46.254 "ack_timeout": 0, 00:22:46.254 "buf_cache_size": 4294967295, 00:22:46.254 "c2h_success": false, 00:22:46.254 "data_wr_pool_size": 0, 00:22:46.254 "dif_insert_or_strip": false, 00:22:46.254 "in_capsule_data_size": 4096, 00:22:46.254 "io_unit_size": 131072, 00:22:46.254 "max_aq_depth": 128, 00:22:46.254 "max_io_qpairs_per_ctrlr": 127, 00:22:46.254 "max_io_size": 131072, 00:22:46.254 "max_queue_depth": 128, 00:22:46.254 "num_shared_buffers": 511, 00:22:46.254 "sock_priority": 0, 00:22:46.254 "trtype": "TCP", 00:22:46.254 "zcopy": false 00:22:46.254 } 00:22:46.254 }, 00:22:46.254 { 00:22:46.254 "method": "nvmf_create_subsystem", 00:22:46.254 "params": { 00:22:46.254 "allow_any_host": false, 00:22:46.254 "ana_reporting": false, 00:22:46.254 "max_cntlid": 65519, 00:22:46.254 "max_namespaces": 10, 00:22:46.254 "min_cntlid": 
1, 00:22:46.254 "model_number": "SPDK bdev Controller", 00:22:46.254 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.254 "serial_number": "SPDK00000000000001" 00:22:46.254 } 00:22:46.254 }, 00:22:46.254 { 00:22:46.254 "method": "nvmf_subsystem_add_host", 00:22:46.254 "params": { 00:22:46.254 "host": "nqn.2016-06.io.spdk:host1", 00:22:46.254 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.254 "psk": "key0" 00:22:46.254 } 00:22:46.254 }, 00:22:46.254 { 00:22:46.254 "method": "nvmf_subsystem_add_ns", 00:22:46.254 "params": { 00:22:46.254 "namespace": { 00:22:46.254 "bdev_name": "malloc0", 00:22:46.254 "nguid": "E5A6142931144BE28C2AD8C4C959A5B2", 00:22:46.254 "no_auto_visible": false, 00:22:46.254 "nsid": 1, 00:22:46.254 "uuid": "e5a61429-3114-4be2-8c2a-d8c4c959a5b2" 00:22:46.254 }, 00:22:46.254 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:46.254 } 00:22:46.254 }, 00:22:46.254 { 00:22:46.254 "method": "nvmf_subsystem_add_listener", 00:22:46.254 "params": { 00:22:46.254 "listen_address": { 00:22:46.254 "adrfam": "IPv4", 00:22:46.254 "traddr": "10.0.0.3", 00:22:46.254 "trsvcid": "4420", 00:22:46.254 "trtype": "TCP" 00:22:46.254 }, 00:22:46.254 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.254 "secure_channel": true 00:22:46.254 } 00:22:46.254 } 00:22:46.254 ] 00:22:46.254 } 00:22:46.254 ] 00:22:46.254 }' 00:22:46.254 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.254 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:46.254 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=101565 00:22:46.254 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 101565 00:22:46.254 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101565 ']' 00:22:46.254 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.254 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:46.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.254 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.254 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:46.254 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.254 [2024-12-14 18:43:01.844397] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:46.254 [2024-12-14 18:43:01.844473] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.254 [2024-12-14 18:43:01.969438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.513 [2024-12-14 18:43:02.043585] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.513 [2024-12-14 18:43:02.043669] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
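A note for readers tracing the harness: the JSON blob echoed above never touches disk. The test emits it inline and the target reads it from a pipe, which is why the traced command line shows -c /dev/fd/62. A minimal sketch of that pattern, with the config body elided and paths taken from this log (the <(...) process substitution is an assumption about how that fd is produced):

```bash
#!/usr/bin/env bash
# Sketch only: nvmf_tgt_config would emit the full JSON echoed above.
nvmf_tgt_config() {
    cat <<'EOF'
{ "subsystems": [ ... ] }
EOF
}

# Process substitution exposes the generated config as /dev/fd/<n>,
# matching the "-c /dev/fd/62" visible in the traced command.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 -c <(nvmf_tgt_config) &
nvmfpid=$!
```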
00:22:46.513 [2024-12-14 18:43:02.043679] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.513 [2024-12-14 18:43:02.043686] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.513 [2024-12-14 18:43:02.043692] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:46.513 [2024-12-14 18:43:02.043771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.772 [2024-12-14 18:43:02.306275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.772 [2024-12-14 18:43:02.349517] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:46.772 [2024-12-14 18:43:02.349769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:47.340 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:47.340 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:47.340 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:47.340 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:47.340 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.340 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.340 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=101609 00:22:47.340 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 101609 /var/tmp/bdevperf.sock 00:22:47.340 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101609 ']' 00:22:47.340 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.340 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:47.340 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:47.340 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
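The bdevperf instance launched above uses -z, so it parses its config, opens /var/tmp/bdevperf.sock, and then idles until perform_tests arrives over RPC; waitforlisten simply polls that socket first. A simplified stand-in for that wait, assuming the same paths (rpc_get_methods is a stock SPDK RPC; the real helper caps retries at max_retries=100, omitted here, and bdevperf_config is a placeholder for the JSON echoed below):

```bash
# Launch in wait mode (-z) with the generated config piped in, as traced.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c <(bdevperf_config) &
bdevperf_pid=$!

# Poll the RPC socket until the app answers; only then is it safe to drive it.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
```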
00:22:47.340 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:47.340 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.340 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:22:47.340 "subsystems": [ 00:22:47.340 { 00:22:47.340 "subsystem": "keyring", 00:22:47.340 "config": [ 00:22:47.340 { 00:22:47.340 "method": "keyring_file_add_key", 00:22:47.340 "params": { 00:22:47.340 "name": "key0", 00:22:47.340 "path": "/tmp/tmp.lNsBQVAT5Y" 00:22:47.340 } 00:22:47.340 } 00:22:47.340 ] 00:22:47.340 }, 00:22:47.340 { 00:22:47.340 "subsystem": "iobuf", 00:22:47.340 "config": [ 00:22:47.340 { 00:22:47.340 "method": "iobuf_set_options", 00:22:47.340 "params": { 00:22:47.340 "large_bufsize": 135168, 00:22:47.340 "large_pool_count": 1024, 00:22:47.340 "small_bufsize": 8192, 00:22:47.340 "small_pool_count": 8192 00:22:47.340 } 00:22:47.340 } 00:22:47.340 ] 00:22:47.340 }, 00:22:47.340 { 00:22:47.340 "subsystem": "sock", 00:22:47.340 "config": [ 00:22:47.340 { 00:22:47.340 "method": "sock_set_default_impl", 00:22:47.340 "params": { 00:22:47.340 "impl_name": "posix" 00:22:47.340 } 00:22:47.340 }, 00:22:47.340 { 00:22:47.340 "method": "sock_impl_set_options", 00:22:47.340 "params": { 00:22:47.340 "enable_ktls": false, 00:22:47.340 "enable_placement_id": 0, 00:22:47.340 "enable_quickack": false, 00:22:47.340 "enable_recv_pipe": true, 00:22:47.340 "enable_zerocopy_send_client": false, 00:22:47.340 "enable_zerocopy_send_server": true, 00:22:47.340 "impl_name": "ssl", 00:22:47.340 "recv_buf_size": 4096, 00:22:47.340 "send_buf_size": 4096, 00:22:47.340 "tls_version": 0, 00:22:47.340 "zerocopy_threshold": 0 00:22:47.340 } 00:22:47.340 }, 00:22:47.340 { 00:22:47.340 "method": "sock_impl_set_options", 00:22:47.340 "params": { 00:22:47.340 "enable_ktls": false, 00:22:47.340 "enable_placement_id": 0, 00:22:47.340 "enable_quickack": false, 00:22:47.340 "enable_recv_pipe": true, 00:22:47.340 "enable_zerocopy_send_client": false, 00:22:47.340 "enable_zerocopy_send_server": true, 00:22:47.340 "impl_name": "posix", 00:22:47.340 "recv_buf_size": 2097152, 00:22:47.340 "send_buf_size": 2097152, 00:22:47.340 "tls_version": 0, 00:22:47.340 "zerocopy_threshold": 0 00:22:47.340 } 00:22:47.340 } 00:22:47.340 ] 00:22:47.340 }, 00:22:47.340 { 00:22:47.340 "subsystem": "vmd", 00:22:47.340 "config": [] 00:22:47.340 }, 00:22:47.340 { 00:22:47.340 "subsystem": "accel", 00:22:47.340 "config": [ 00:22:47.340 { 00:22:47.340 "method": "accel_set_options", 00:22:47.340 "params": { 00:22:47.340 "buf_count": 2048, 00:22:47.340 "large_cache_size": 16, 00:22:47.340 "sequence_count": 2048, 00:22:47.340 "small_cache_size": 128, 00:22:47.340 "task_count": 2048 00:22:47.340 } 00:22:47.340 } 00:22:47.340 ] 00:22:47.340 }, 00:22:47.340 { 00:22:47.340 "subsystem": "bdev", 00:22:47.340 "config": [ 00:22:47.340 { 00:22:47.340 "method": "bdev_set_options", 00:22:47.340 "params": { 00:22:47.340 "bdev_auto_examine": true, 00:22:47.340 "bdev_io_cache_size": 256, 00:22:47.340 "bdev_io_pool_size": 65535, 00:22:47.340 "iobuf_large_cache_size": 16, 00:22:47.340 "iobuf_small_cache_size": 128 00:22:47.340 } 00:22:47.340 }, 00:22:47.340 { 00:22:47.340 "method": "bdev_raid_set_options", 00:22:47.340 "params": { 00:22:47.340 "process_max_bandwidth_mb_sec": 0, 00:22:47.340 "process_window_size_kb": 1024 00:22:47.340 } 00:22:47.340 }, 00:22:47.340 { 00:22:47.340 "method": "bdev_iscsi_set_options", 00:22:47.340 "params": { 00:22:47.340 
"timeout_sec": 30 00:22:47.340 } 00:22:47.340 }, 00:22:47.340 { 00:22:47.340 "method": "bdev_nvme_set_options", 00:22:47.340 "params": { 00:22:47.340 "action_on_timeout": "none", 00:22:47.340 "allow_accel_sequence": false, 00:22:47.340 "arbitration_burst": 0, 00:22:47.341 "bdev_retry_count": 3, 00:22:47.341 "ctrlr_loss_timeout_sec": 0, 00:22:47.341 "delay_cmd_submit": true, 00:22:47.341 "dhchap_dhgroups": [ 00:22:47.341 "null", 00:22:47.341 "ffdhe2048", 00:22:47.341 "ffdhe3072", 00:22:47.341 "ffdhe4096", 00:22:47.341 "ffdhe6144", 00:22:47.341 "ffdhe8192" 00:22:47.341 ], 00:22:47.341 "dhchap_digests": [ 00:22:47.341 "sha256", 00:22:47.341 "sha384", 00:22:47.341 "sha512" 00:22:47.341 ], 00:22:47.341 "disable_auto_failback": false, 00:22:47.341 "fast_io_fail_timeout_sec": 0, 00:22:47.341 "generate_uuids": false, 00:22:47.341 "high_priority_weight": 0, 00:22:47.341 "io_path_stat": false, 00:22:47.341 "io_queue_requests": 512, 00:22:47.341 "keep_alive_timeout_ms": 10000, 00:22:47.341 "low_priority_weight": 0, 00:22:47.341 "medium_priority_weight": 0, 00:22:47.341 "nvme_adminq_poll_period_us": 10000, 00:22:47.341 "nvme_error_stat": false, 00:22:47.341 "nvme_ioq_poll_period_us": 0, 00:22:47.341 "rdma_cm_event_timeout_ms": 0, 00:22:47.341 "rdma_max_cq_size": 0, 00:22:47.341 "rdma_srq_size": 0, 00:22:47.341 "reconnect_delay_sec": 0, 00:22:47.341 "timeout_admin_us": 0, 00:22:47.341 "timeout_us": 0, 00:22:47.341 "transport_ack_timeout": 0, 00:22:47.341 "transport_retry_count": 4, 00:22:47.341 "transport_tos": 0 00:22:47.341 } 00:22:47.341 }, 00:22:47.341 { 00:22:47.341 "method": "bdev_nvme_attach_controller", 00:22:47.341 "params": { 00:22:47.341 "adrfam": "IPv4", 00:22:47.341 "ctrlr_loss_timeout_sec": 0, 00:22:47.341 "ddgst": false, 00:22:47.341 "fast_io_fail_timeout_sec": 0, 00:22:47.341 "hdgst": false, 00:22:47.341 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:47.341 "name": "TLSTEST", 00:22:47.341 "prchk_guard": false, 00:22:47.341 "prchk_reftag": false, 00:22:47.341 "psk": "key0", 00:22:47.341 "reconnect_delay_sec": 0, 00:22:47.341 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.341 "traddr": "10.0.0.3", 00:22:47.341 "trsvcid": "4420", 00:22:47.341 "trtype": "TCP" 00:22:47.341 } 00:22:47.341 }, 00:22:47.341 { 00:22:47.341 "method": "bdev_nvme_set_hotplug", 00:22:47.341 "params": { 00:22:47.341 "enable": false, 00:22:47.341 "period_us": 100000 00:22:47.341 } 00:22:47.341 }, 00:22:47.341 { 00:22:47.341 "method": "bdev_wait_for_examine" 00:22:47.341 } 00:22:47.341 ] 00:22:47.341 }, 00:22:47.341 { 00:22:47.341 "subsystem": "nbd", 00:22:47.341 "config": [] 00:22:47.341 } 00:22:47.341 ] 00:22:47.341 }' 00:22:47.341 [2024-12-14 18:43:02.941849] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:22:47.341 [2024-12-14 18:43:02.941958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101609 ] 00:22:47.341 [2024-12-14 18:43:03.082428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.599 [2024-12-14 18:43:03.159880] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.599 [2024-12-14 18:43:03.342505] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:48.535 18:43:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:48.535 18:43:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:48.535 18:43:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:48.535 Running I/O for 10 seconds... 00:22:50.412 4043.00 IOPS, 15.79 MiB/s [2024-12-14T18:43:07.542Z] 4032.00 IOPS, 15.75 MiB/s [2024-12-14T18:43:08.109Z] 4018.33 IOPS, 15.70 MiB/s [2024-12-14T18:43:09.485Z] 4021.25 IOPS, 15.71 MiB/s [2024-12-14T18:43:10.422Z] 4017.40 IOPS, 15.69 MiB/s [2024-12-14T18:43:11.359Z] 4011.67 IOPS, 15.67 MiB/s [2024-12-14T18:43:12.295Z] 4044.43 IOPS, 15.80 MiB/s [2024-12-14T18:43:13.231Z] 4078.75 IOPS, 15.93 MiB/s [2024-12-14T18:43:14.168Z] 4112.56 IOPS, 16.06 MiB/s [2024-12-14T18:43:14.168Z] 4133.10 IOPS, 16.14 MiB/s 00:22:58.415 Latency(us) 00:22:58.415 [2024-12-14T18:43:14.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.415 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:58.415 Verification LBA range: start 0x0 length 0x2000 00:22:58.415 TLSTESTn1 : 10.01 4139.62 16.17 0.00 0.00 30868.80 4200.26 27048.49 00:22:58.415 [2024-12-14T18:43:14.168Z] =================================================================================================================== 00:22:58.415 [2024-12-14T18:43:14.168Z] Total : 4139.62 16.17 0.00 0.00 30868.80 4200.26 27048.49 00:22:58.415 { 00:22:58.415 "results": [ 00:22:58.416 { 00:22:58.416 "job": "TLSTESTn1", 00:22:58.416 "core_mask": "0x4", 00:22:58.416 "workload": "verify", 00:22:58.416 "status": "finished", 00:22:58.416 "verify_range": { 00:22:58.416 "start": 0, 00:22:58.416 "length": 8192 00:22:58.416 }, 00:22:58.416 "queue_depth": 128, 00:22:58.416 "io_size": 4096, 00:22:58.416 "runtime": 10.014696, 00:22:58.416 "iops": 4139.616419709595, 00:22:58.416 "mibps": 16.170376639490605, 00:22:58.416 "io_failed": 0, 00:22:58.416 "io_timeout": 0, 00:22:58.416 "avg_latency_us": 30868.80371977975, 00:22:58.416 "min_latency_us": 4200.261818181818, 00:22:58.416 "max_latency_us": 27048.494545454545 00:22:58.416 } 00:22:58.416 ], 00:22:58.416 "core_count": 1 00:22:58.416 } 00:22:58.416 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:58.416 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 101609 00:22:58.416 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101609 ']' 00:22:58.416 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101609 00:22:58.416 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:22:58.416 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:58.416 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101609 00:22:58.675 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:58.675 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:58.675 killing process with pid 101609 00:22:58.675 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101609' 00:22:58.675 Received shutdown signal, test time was about 10.000000 seconds 00:22:58.675 00:22:58.675 Latency(us) 00:22:58.675 [2024-12-14T18:43:14.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.675 [2024-12-14T18:43:14.428Z] =================================================================================================================== 00:22:58.675 [2024-12-14T18:43:14.428Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:58.675 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101609 00:22:58.675 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101609 00:22:58.675 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 101565 00:22:58.675 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101565 ']' 00:22:58.675 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101565 00:22:58.675 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:58.675 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:58.675 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101565 00:22:58.934 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:58.934 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:58.934 killing process with pid 101565 00:22:58.934 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101565' 00:22:58.934 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101565 00:22:58.934 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101565 00:22:59.193 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:22:59.193 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:59.193 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:59.193 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.193 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=101757 00:22:59.193 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:59.193 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 101757 00:22:59.193 18:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101757 ']' 00:22:59.193 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.193 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:59.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.193 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.193 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:59.193 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.193 [2024-12-14 18:43:14.783494] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:59.193 [2024-12-14 18:43:14.783616] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.193 [2024-12-14 18:43:14.926630] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.452 [2024-12-14 18:43:15.008017] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.452 [2024-12-14 18:43:15.008097] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.452 [2024-12-14 18:43:15.008112] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.452 [2024-12-14 18:43:15.008123] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.452 [2024-12-14 18:43:15.008133] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
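Each nvmfappstart in this log is immediately followed by the same trap line, so shared-memory state is dumped and the test torn down even if a signal lands mid-run. Stripped to its shape (process_shm and nvmftestfini are harness helpers, stubbed here so the sketch stands alone):

```bash
# Stubs standing in for the harness helpers of the same names.
process_shm()  { :; }   # real helper inspects /dev/shm for the app's SHM id
nvmftestfini() { :; }   # real helper kills targets and cleans up namespaces

# Verbatim shape of the trap traced above: dump SHM (tolerating failure),
# then always run the finalizer, on interrupt, termination, or normal exit.
trap 'process_shm --id "$NVMF_APP_SHM_ID" || :; nvmftestfini' SIGINT SIGTERM EXIT
```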
00:22:59.452 [2024-12-14 18:43:15.008183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.388 18:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:00.388 18:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:00.388 18:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:00.388 18:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:00.388 18:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.388 18:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.388 18:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.lNsBQVAT5Y 00:23:00.388 18:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lNsBQVAT5Y 00:23:00.388 18:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:00.388 [2024-12-14 18:43:16.093657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.388 18:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:00.649 18:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:23:01.259 [2024-12-14 18:43:16.677758] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:01.259 [2024-12-14 18:43:16.677996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:01.259 18:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:01.259 malloc0 00:23:01.528 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:01.787 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lNsBQVAT5Y 00:23:02.045 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:02.304 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=101872 00:23:02.304 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:02.304 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:02.304 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 101872 /var/tmp/bdevperf.sock 00:23:02.304 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101872 ']' 00:23:02.304 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
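Collected from the xtrace lines above, the whole setup_nvmf_tgt helper comes down to seven RPCs against the target's default socket; the key path is the throwaway PSK file the test created earlier:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
key=/tmp/tmp.lNsBQVAT5Y

"$rpc" nvmf_create_transport -t tcp -o
"$rpc" nvmf_create_subsystem "$nqn" -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420 -k  # -k: TLS listener
"$rpc" bdev_malloc_create 32 4096 -b malloc0     # 32 MiB ramdisk, 4 KiB blocks
"$rpc" nvmf_subsystem_add_ns "$nqn" malloc0 -n 1
"$rpc" keyring_file_add_key key0 "$key"          # register the PSK under the name key0
"$rpc" nvmf_subsystem_add_host "$nqn" nqn.2016-06.io.spdk:host1 --psk key0
```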
00:23:02.304 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:02.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:02.304 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.304 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:02.304 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.304 [2024-12-14 18:43:17.954367] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:02.304 [2024-12-14 18:43:17.954476] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101872 ] 00:23:02.563 [2024-12-14 18:43:18.097412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.563 [2024-12-14 18:43:18.183374] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.499 18:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:03.499 18:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:03.499 18:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lNsBQVAT5Y 00:23:03.499 18:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:03.758 [2024-12-14 18:43:19.463434] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:04.016 nvme0n1 00:23:04.016 18:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:04.016 Running I/O for 1 seconds... 
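On the initiator side, traced just above, the same PSK file is registered with bdevperf's own RPC server, the controller is attached with --psk, and only then does bdevperf.py fire the workload that was parameterized at launch (-q 128 -o 4k -w verify -t 1):

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

"$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.lNsBQVAT5Y
"$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

# The TLS handshake happens inside attach; on success the nvme0n1 bdev
# appears and perform_tests runs the preconfigured verify job.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
```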
00:23:04.953 4493.00 IOPS, 17.55 MiB/s 00:23:04.953 Latency(us) 00:23:04.953 [2024-12-14T18:43:20.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.953 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:04.953 Verification LBA range: start 0x0 length 0x2000 00:23:04.953 nvme0n1 : 1.02 4547.97 17.77 0.00 0.00 27847.21 1206.46 18707.55 00:23:04.953 [2024-12-14T18:43:20.706Z] =================================================================================================================== 00:23:04.953 [2024-12-14T18:43:20.706Z] Total : 4547.97 17.77 0.00 0.00 27847.21 1206.46 18707.55 00:23:04.953 { 00:23:04.953 "results": [ 00:23:04.953 { 00:23:04.953 "job": "nvme0n1", 00:23:04.953 "core_mask": "0x2", 00:23:04.953 "workload": "verify", 00:23:04.953 "status": "finished", 00:23:04.953 "verify_range": { 00:23:04.953 "start": 0, 00:23:04.953 "length": 8192 00:23:04.953 }, 00:23:04.953 "queue_depth": 128, 00:23:04.953 "io_size": 4096, 00:23:04.953 "runtime": 1.016277, 00:23:04.953 "iops": 4547.972649189148, 00:23:04.953 "mibps": 17.76551816089511, 00:23:04.953 "io_failed": 0, 00:23:04.953 "io_timeout": 0, 00:23:04.953 "avg_latency_us": 27847.210517288855, 00:23:04.953 "min_latency_us": 1206.4581818181819, 00:23:04.953 "max_latency_us": 18707.54909090909 00:23:04.953 } 00:23:04.953 ], 00:23:04.953 "core_count": 1 00:23:04.953 } 00:23:05.211 18:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 101872 00:23:05.211 18:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101872 ']' 00:23:05.211 18:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101872 00:23:05.211 18:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:05.211 18:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:05.211 18:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101872 00:23:05.211 killing process with pid 101872 00:23:05.211 Received shutdown signal, test time was about 1.000000 seconds 00:23:05.211 00:23:05.211 Latency(us) 00:23:05.211 [2024-12-14T18:43:20.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.211 [2024-12-14T18:43:20.964Z] =================================================================================================================== 00:23:05.211 [2024-12-14T18:43:20.964Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.212 18:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:05.212 18:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:05.212 18:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101872' 00:23:05.212 18:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101872 00:23:05.212 18:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101872 00:23:05.470 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 101757 00:23:05.470 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101757 ']' 00:23:05.470 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101757 00:23:05.470 18:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:05.470 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:05.470 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101757 00:23:05.470 killing process with pid 101757 00:23:05.470 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:05.470 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:05.470 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101757' 00:23:05.470 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101757 00:23:05.470 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101757 00:23:05.729 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:05.729 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:05.729 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:05.729 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.729 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=101947 00:23:05.729 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:05.729 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 101947 00:23:05.729 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101947 ']' 00:23:05.729 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.729 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:05.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.730 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.730 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:05.730 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.730 [2024-12-14 18:43:21.422733] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:05.730 [2024-12-14 18:43:21.422917] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.988 [2024-12-14 18:43:21.566785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.988 [2024-12-14 18:43:21.641867] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.988 [2024-12-14 18:43:21.641929] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
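The teardown traced above for pids 101872 and 101757 (and earlier for 101609 and 101565) is one helper; a condensed sketch of its visible logic, with the sudo special case left as a comment since this log never exercises it:

```bash
killprocess() {
    local pid=$1
    kill -0 "$pid" || return            # probe: is the pid still alive?
    if [[ $(uname) == Linux ]]; then
        case $(ps --no-headers -o comm= "$pid") in
            sudo) ;;                    # the real helper treats sudo-wrapped apps specially
        esac
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"                         # reap it so the exit status is collected
}
```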
00:23:05.988 [2024-12-14 18:43:21.641940] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.988 [2024-12-14 18:43:21.641947] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.988 [2024-12-14 18:43:21.641953] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.988 [2024-12-14 18:43:21.641980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.247 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:06.247 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:06.247 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:06.247 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:06.247 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.247 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.247 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:06.247 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.247 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.247 [2024-12-14 18:43:21.841896] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.247 malloc0 00:23:06.247 [2024-12-14 18:43:21.875687] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:06.247 [2024-12-14 18:43:21.875944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:06.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.247 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.247 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=101984 00:23:06.247 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:06.247 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 101984 /var/tmp/bdevperf.sock 00:23:06.247 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101984 ']' 00:23:06.248 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.248 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:06.248 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.248 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:06.248 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.248 [2024-12-14 18:43:21.964056] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
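This last pass is the one that differs: the target is configured through live rpc_cmd calls rather than a pre-baked config, and after the verify run both sides are snapshotted with save_config, which is exactly what produces the tgtcfg and bperfcfg JSON dumps that follow. The capture step, sketched under the assumption that rpc_cmd wraps rpc.py against the default socket:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: equivalent to the traced "rpc_cmd save_config" (default
# socket /var/tmp/spdk.sock).
tgtcfg=$("$rpc" save_config)

# Initiator side: same RPC against bdevperf's private socket.
bperfcfg=$("$rpc" -s /var/tmp/bdevperf.sock save_config)
```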
00:23:06.248 [2024-12-14 18:43:21.964381] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101984 ] 00:23:06.507 [2024-12-14 18:43:22.105937] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.507 [2024-12-14 18:43:22.189313] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.442 18:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:07.442 18:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:07.442 18:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lNsBQVAT5Y 00:23:07.701 18:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:07.960 [2024-12-14 18:43:23.515789] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.960 nvme0n1 00:23:07.960 18:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:08.218 Running I/O for 1 seconds... 00:23:09.155 4055.00 IOPS, 15.84 MiB/s 00:23:09.155 Latency(us) 00:23:09.155 [2024-12-14T18:43:24.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.155 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:09.155 Verification LBA range: start 0x0 length 0x2000 00:23:09.155 nvme0n1 : 1.01 4129.00 16.13 0.00 0.00 30797.21 4081.11 32887.16 00:23:09.155 [2024-12-14T18:43:24.908Z] =================================================================================================================== 00:23:09.155 [2024-12-14T18:43:24.908Z] Total : 4129.00 16.13 0.00 0.00 30797.21 4081.11 32887.16 00:23:09.155 { 00:23:09.155 "results": [ 00:23:09.155 { 00:23:09.155 "job": "nvme0n1", 00:23:09.155 "core_mask": "0x2", 00:23:09.155 "workload": "verify", 00:23:09.155 "status": "finished", 00:23:09.155 "verify_range": { 00:23:09.155 "start": 0, 00:23:09.155 "length": 8192 00:23:09.155 }, 00:23:09.155 "queue_depth": 128, 00:23:09.155 "io_size": 4096, 00:23:09.155 "runtime": 1.013078, 00:23:09.155 "iops": 4129.000925891195, 00:23:09.155 "mibps": 16.12890986676248, 00:23:09.155 "io_failed": 0, 00:23:09.155 "io_timeout": 0, 00:23:09.155 "avg_latency_us": 30797.20915045748, 00:23:09.155 "min_latency_us": 4081.1054545454544, 00:23:09.155 "max_latency_us": 32887.156363636364 00:23:09.155 } 00:23:09.155 ], 00:23:09.155 "core_count": 1 00:23:09.155 } 00:23:09.155 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:09.155 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.155 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.414 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.414 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 
00:23:09.414 "subsystems": [ 00:23:09.414 { 00:23:09.414 "subsystem": "keyring", 00:23:09.414 "config": [ 00:23:09.414 { 00:23:09.414 "method": "keyring_file_add_key", 00:23:09.414 "params": { 00:23:09.414 "name": "key0", 00:23:09.414 "path": "/tmp/tmp.lNsBQVAT5Y" 00:23:09.414 } 00:23:09.414 } 00:23:09.414 ] 00:23:09.414 }, 00:23:09.414 { 00:23:09.414 "subsystem": "iobuf", 00:23:09.414 "config": [ 00:23:09.414 { 00:23:09.414 "method": "iobuf_set_options", 00:23:09.414 "params": { 00:23:09.414 "large_bufsize": 135168, 00:23:09.414 "large_pool_count": 1024, 00:23:09.414 "small_bufsize": 8192, 00:23:09.414 "small_pool_count": 8192 00:23:09.414 } 00:23:09.414 } 00:23:09.414 ] 00:23:09.414 }, 00:23:09.414 { 00:23:09.414 "subsystem": "sock", 00:23:09.414 "config": [ 00:23:09.414 { 00:23:09.414 "method": "sock_set_default_impl", 00:23:09.414 "params": { 00:23:09.414 "impl_name": "posix" 00:23:09.414 } 00:23:09.414 }, 00:23:09.414 { 00:23:09.414 "method": "sock_impl_set_options", 00:23:09.414 "params": { 00:23:09.414 "enable_ktls": false, 00:23:09.414 "enable_placement_id": 0, 00:23:09.414 "enable_quickack": false, 00:23:09.414 "enable_recv_pipe": true, 00:23:09.414 "enable_zerocopy_send_client": false, 00:23:09.414 "enable_zerocopy_send_server": true, 00:23:09.414 "impl_name": "ssl", 00:23:09.414 "recv_buf_size": 4096, 00:23:09.414 "send_buf_size": 4096, 00:23:09.414 "tls_version": 0, 00:23:09.414 "zerocopy_threshold": 0 00:23:09.414 } 00:23:09.414 }, 00:23:09.414 { 00:23:09.414 "method": "sock_impl_set_options", 00:23:09.414 "params": { 00:23:09.414 "enable_ktls": false, 00:23:09.414 "enable_placement_id": 0, 00:23:09.414 "enable_quickack": false, 00:23:09.414 "enable_recv_pipe": true, 00:23:09.414 "enable_zerocopy_send_client": false, 00:23:09.414 "enable_zerocopy_send_server": true, 00:23:09.414 "impl_name": "posix", 00:23:09.414 "recv_buf_size": 2097152, 00:23:09.414 "send_buf_size": 2097152, 00:23:09.414 "tls_version": 0, 00:23:09.414 "zerocopy_threshold": 0 00:23:09.414 } 00:23:09.414 } 00:23:09.414 ] 00:23:09.414 }, 00:23:09.414 { 00:23:09.414 "subsystem": "vmd", 00:23:09.414 "config": [] 00:23:09.414 }, 00:23:09.414 { 00:23:09.414 "subsystem": "accel", 00:23:09.414 "config": [ 00:23:09.414 { 00:23:09.414 "method": "accel_set_options", 00:23:09.414 "params": { 00:23:09.414 "buf_count": 2048, 00:23:09.414 "large_cache_size": 16, 00:23:09.414 "sequence_count": 2048, 00:23:09.414 "small_cache_size": 128, 00:23:09.414 "task_count": 2048 00:23:09.414 } 00:23:09.414 } 00:23:09.414 ] 00:23:09.414 }, 00:23:09.414 { 00:23:09.414 "subsystem": "bdev", 00:23:09.414 "config": [ 00:23:09.414 { 00:23:09.414 "method": "bdev_set_options", 00:23:09.414 "params": { 00:23:09.414 "bdev_auto_examine": true, 00:23:09.414 "bdev_io_cache_size": 256, 00:23:09.414 "bdev_io_pool_size": 65535, 00:23:09.414 "iobuf_large_cache_size": 16, 00:23:09.414 "iobuf_small_cache_size": 128 00:23:09.414 } 00:23:09.414 }, 00:23:09.414 { 00:23:09.414 "method": "bdev_raid_set_options", 00:23:09.414 "params": { 00:23:09.414 "process_max_bandwidth_mb_sec": 0, 00:23:09.414 "process_window_size_kb": 1024 00:23:09.414 } 00:23:09.414 }, 00:23:09.414 { 00:23:09.414 "method": "bdev_iscsi_set_options", 00:23:09.414 "params": { 00:23:09.414 "timeout_sec": 30 00:23:09.414 } 00:23:09.414 }, 00:23:09.414 { 00:23:09.414 "method": "bdev_nvme_set_options", 00:23:09.414 "params": { 00:23:09.414 "action_on_timeout": "none", 00:23:09.414 "allow_accel_sequence": false, 00:23:09.414 "arbitration_burst": 0, 00:23:09.414 "bdev_retry_count": 3, 00:23:09.414 
"ctrlr_loss_timeout_sec": 0, 00:23:09.414 "delay_cmd_submit": true, 00:23:09.414 "dhchap_dhgroups": [ 00:23:09.414 "null", 00:23:09.414 "ffdhe2048", 00:23:09.414 "ffdhe3072", 00:23:09.414 "ffdhe4096", 00:23:09.414 "ffdhe6144", 00:23:09.414 "ffdhe8192" 00:23:09.414 ], 00:23:09.414 "dhchap_digests": [ 00:23:09.414 "sha256", 00:23:09.414 "sha384", 00:23:09.414 "sha512" 00:23:09.414 ], 00:23:09.414 "disable_auto_failback": false, 00:23:09.414 "fast_io_fail_timeout_sec": 0, 00:23:09.414 "generate_uuids": false, 00:23:09.414 "high_priority_weight": 0, 00:23:09.414 "io_path_stat": false, 00:23:09.414 "io_queue_requests": 0, 00:23:09.414 "keep_alive_timeout_ms": 10000, 00:23:09.414 "low_priority_weight": 0, 00:23:09.414 "medium_priority_weight": 0, 00:23:09.414 "nvme_adminq_poll_period_us": 10000, 00:23:09.414 "nvme_error_stat": false, 00:23:09.414 "nvme_ioq_poll_period_us": 0, 00:23:09.414 "rdma_cm_event_timeout_ms": 0, 00:23:09.414 "rdma_max_cq_size": 0, 00:23:09.414 "rdma_srq_size": 0, 00:23:09.414 "reconnect_delay_sec": 0, 00:23:09.414 "timeout_admin_us": 0, 00:23:09.414 "timeout_us": 0, 00:23:09.414 "transport_ack_timeout": 0, 00:23:09.414 "transport_retry_count": 4, 00:23:09.414 "transport_tos": 0 00:23:09.414 } 00:23:09.414 }, 00:23:09.414 { 00:23:09.414 "method": "bdev_nvme_set_hotplug", 00:23:09.414 "params": { 00:23:09.414 "enable": false, 00:23:09.414 "period_us": 100000 00:23:09.414 } 00:23:09.414 }, 00:23:09.414 { 00:23:09.414 "method": "bdev_malloc_create", 00:23:09.414 "params": { 00:23:09.414 "block_size": 4096, 00:23:09.414 "dif_is_head_of_md": false, 00:23:09.414 "dif_pi_format": 0, 00:23:09.414 "dif_type": 0, 00:23:09.414 "md_size": 0, 00:23:09.414 "name": "malloc0", 00:23:09.414 "num_blocks": 8192, 00:23:09.414 "optimal_io_boundary": 0, 00:23:09.414 "physical_block_size": 4096, 00:23:09.414 "uuid": "89b50e64-cdb2-48b6-8797-9eb49b437133" 00:23:09.414 } 00:23:09.414 }, 00:23:09.414 { 00:23:09.414 "method": "bdev_wait_for_examine" 00:23:09.414 } 00:23:09.414 ] 00:23:09.414 }, 00:23:09.414 { 00:23:09.414 "subsystem": "nbd", 00:23:09.414 "config": [] 00:23:09.414 }, 00:23:09.414 { 00:23:09.414 "subsystem": "scheduler", 00:23:09.414 "config": [ 00:23:09.414 { 00:23:09.414 "method": "framework_set_scheduler", 00:23:09.414 "params": { 00:23:09.414 "name": "static" 00:23:09.414 } 00:23:09.414 } 00:23:09.414 ] 00:23:09.414 }, 00:23:09.414 { 00:23:09.414 "subsystem": "nvmf", 00:23:09.414 "config": [ 00:23:09.414 { 00:23:09.414 "method": "nvmf_set_config", 00:23:09.414 "params": { 00:23:09.414 "admin_cmd_passthru": { 00:23:09.414 "identify_ctrlr": false 00:23:09.414 }, 00:23:09.414 "dhchap_dhgroups": [ 00:23:09.414 "null", 00:23:09.414 "ffdhe2048", 00:23:09.414 "ffdhe3072", 00:23:09.414 "ffdhe4096", 00:23:09.414 "ffdhe6144", 00:23:09.414 "ffdhe8192" 00:23:09.414 ], 00:23:09.414 "dhchap_digests": [ 00:23:09.414 "sha256", 00:23:09.414 "sha384", 00:23:09.414 "sha512" 00:23:09.414 ], 00:23:09.414 "discovery_filter": "match_any" 00:23:09.414 } 00:23:09.414 }, 00:23:09.414 { 00:23:09.414 "method": "nvmf_set_max_subsystems", 00:23:09.414 "params": { 00:23:09.414 "max_subsystems": 1024 00:23:09.414 } 00:23:09.414 }, 00:23:09.414 { 00:23:09.414 "method": "nvmf_set_crdt", 00:23:09.414 "params": { 00:23:09.414 "crdt1": 0, 00:23:09.414 "crdt2": 0, 00:23:09.414 "crdt3": 0 00:23:09.414 } 00:23:09.414 }, 00:23:09.414 { 00:23:09.414 "method": "nvmf_create_transport", 00:23:09.414 "params": { 00:23:09.414 "abort_timeout_sec": 1, 00:23:09.414 "ack_timeout": 0, 00:23:09.414 "buf_cache_size": 4294967295, 
00:23:09.414 "c2h_success": false, 00:23:09.414 "data_wr_pool_size": 0, 00:23:09.414 "dif_insert_or_strip": false, 00:23:09.414 "in_capsule_data_size": 4096, 00:23:09.414 "io_unit_size": 131072, 00:23:09.414 "max_aq_depth": 128, 00:23:09.415 "max_io_qpairs_per_ctrlr": 127, 00:23:09.415 "max_io_size": 131072, 00:23:09.415 "max_queue_depth": 128, 00:23:09.415 "num_shared_buffers": 511, 00:23:09.415 "sock_priority": 0, 00:23:09.415 "trtype": "TCP", 00:23:09.415 "zcopy": false 00:23:09.415 } 00:23:09.415 }, 00:23:09.415 { 00:23:09.415 "method": "nvmf_create_subsystem", 00:23:09.415 "params": { 00:23:09.415 "allow_any_host": false, 00:23:09.415 "ana_reporting": false, 00:23:09.415 "max_cntlid": 65519, 00:23:09.415 "max_namespaces": 32, 00:23:09.415 "min_cntlid": 1, 00:23:09.415 "model_number": "SPDK bdev Controller", 00:23:09.415 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.415 "serial_number": "00000000000000000000" 00:23:09.415 } 00:23:09.415 }, 00:23:09.415 { 00:23:09.415 "method": "nvmf_subsystem_add_host", 00:23:09.415 "params": { 00:23:09.415 "host": "nqn.2016-06.io.spdk:host1", 00:23:09.415 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.415 "psk": "key0" 00:23:09.415 } 00:23:09.415 }, 00:23:09.415 { 00:23:09.415 "method": "nvmf_subsystem_add_ns", 00:23:09.415 "params": { 00:23:09.415 "namespace": { 00:23:09.415 "bdev_name": "malloc0", 00:23:09.415 "nguid": "89B50E64CDB248B687979EB49B437133", 00:23:09.415 "no_auto_visible": false, 00:23:09.415 "nsid": 1, 00:23:09.415 "uuid": "89b50e64-cdb2-48b6-8797-9eb49b437133" 00:23:09.415 }, 00:23:09.415 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:23:09.415 } 00:23:09.415 }, 00:23:09.415 { 00:23:09.415 "method": "nvmf_subsystem_add_listener", 00:23:09.415 "params": { 00:23:09.415 "listen_address": { 00:23:09.415 "adrfam": "IPv4", 00:23:09.415 "traddr": "10.0.0.3", 00:23:09.415 "trsvcid": "4420", 00:23:09.415 "trtype": "TCP" 00:23:09.415 }, 00:23:09.415 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.415 "secure_channel": false, 00:23:09.415 "sock_impl": "ssl" 00:23:09.415 } 00:23:09.415 } 00:23:09.415 ] 00:23:09.415 } 00:23:09.415 ] 00:23:09.415 }' 00:23:09.415 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:09.673 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:09.673 "subsystems": [ 00:23:09.673 { 00:23:09.673 "subsystem": "keyring", 00:23:09.673 "config": [ 00:23:09.673 { 00:23:09.673 "method": "keyring_file_add_key", 00:23:09.673 "params": { 00:23:09.673 "name": "key0", 00:23:09.673 "path": "/tmp/tmp.lNsBQVAT5Y" 00:23:09.673 } 00:23:09.673 } 00:23:09.673 ] 00:23:09.673 }, 00:23:09.673 { 00:23:09.673 "subsystem": "iobuf", 00:23:09.673 "config": [ 00:23:09.673 { 00:23:09.673 "method": "iobuf_set_options", 00:23:09.673 "params": { 00:23:09.673 "large_bufsize": 135168, 00:23:09.673 "large_pool_count": 1024, 00:23:09.673 "small_bufsize": 8192, 00:23:09.673 "small_pool_count": 8192 00:23:09.673 } 00:23:09.673 } 00:23:09.673 ] 00:23:09.673 }, 00:23:09.673 { 00:23:09.673 "subsystem": "sock", 00:23:09.674 "config": [ 00:23:09.674 { 00:23:09.674 "method": "sock_set_default_impl", 00:23:09.674 "params": { 00:23:09.674 "impl_name": "posix" 00:23:09.674 } 00:23:09.674 }, 00:23:09.674 { 00:23:09.674 "method": "sock_impl_set_options", 00:23:09.674 "params": { 00:23:09.674 "enable_ktls": false, 00:23:09.674 "enable_placement_id": 0, 00:23:09.674 "enable_quickack": false, 00:23:09.674 "enable_recv_pipe": true, 
00:23:09.674 "enable_zerocopy_send_client": false, 00:23:09.674 "enable_zerocopy_send_server": true, 00:23:09.674 "impl_name": "ssl", 00:23:09.674 "recv_buf_size": 4096, 00:23:09.674 "send_buf_size": 4096, 00:23:09.674 "tls_version": 0, 00:23:09.674 "zerocopy_threshold": 0 00:23:09.674 } 00:23:09.674 }, 00:23:09.674 { 00:23:09.674 "method": "sock_impl_set_options", 00:23:09.674 "params": { 00:23:09.674 "enable_ktls": false, 00:23:09.674 "enable_placement_id": 0, 00:23:09.674 "enable_quickack": false, 00:23:09.674 "enable_recv_pipe": true, 00:23:09.674 "enable_zerocopy_send_client": false, 00:23:09.674 "enable_zerocopy_send_server": true, 00:23:09.674 "impl_name": "posix", 00:23:09.674 "recv_buf_size": 2097152, 00:23:09.674 "send_buf_size": 2097152, 00:23:09.674 "tls_version": 0, 00:23:09.674 "zerocopy_threshold": 0 00:23:09.674 } 00:23:09.674 } 00:23:09.674 ] 00:23:09.674 }, 00:23:09.674 { 00:23:09.674 "subsystem": "vmd", 00:23:09.674 "config": [] 00:23:09.674 }, 00:23:09.674 { 00:23:09.674 "subsystem": "accel", 00:23:09.674 "config": [ 00:23:09.674 { 00:23:09.674 "method": "accel_set_options", 00:23:09.674 "params": { 00:23:09.674 "buf_count": 2048, 00:23:09.674 "large_cache_size": 16, 00:23:09.674 "sequence_count": 2048, 00:23:09.674 "small_cache_size": 128, 00:23:09.674 "task_count": 2048 00:23:09.674 } 00:23:09.674 } 00:23:09.674 ] 00:23:09.674 }, 00:23:09.674 { 00:23:09.674 "subsystem": "bdev", 00:23:09.674 "config": [ 00:23:09.674 { 00:23:09.674 "method": "bdev_set_options", 00:23:09.674 "params": { 00:23:09.674 "bdev_auto_examine": true, 00:23:09.674 "bdev_io_cache_size": 256, 00:23:09.674 "bdev_io_pool_size": 65535, 00:23:09.674 "iobuf_large_cache_size": 16, 00:23:09.674 "iobuf_small_cache_size": 128 00:23:09.674 } 00:23:09.674 }, 00:23:09.674 { 00:23:09.674 "method": "bdev_raid_set_options", 00:23:09.674 "params": { 00:23:09.674 "process_max_bandwidth_mb_sec": 0, 00:23:09.674 "process_window_size_kb": 1024 00:23:09.674 } 00:23:09.674 }, 00:23:09.674 { 00:23:09.674 "method": "bdev_iscsi_set_options", 00:23:09.674 "params": { 00:23:09.674 "timeout_sec": 30 00:23:09.674 } 00:23:09.674 }, 00:23:09.674 { 00:23:09.674 "method": "bdev_nvme_set_options", 00:23:09.674 "params": { 00:23:09.674 "action_on_timeout": "none", 00:23:09.674 "allow_accel_sequence": false, 00:23:09.674 "arbitration_burst": 0, 00:23:09.674 "bdev_retry_count": 3, 00:23:09.674 "ctrlr_loss_timeout_sec": 0, 00:23:09.674 "delay_cmd_submit": true, 00:23:09.674 "dhchap_dhgroups": [ 00:23:09.674 "null", 00:23:09.674 "ffdhe2048", 00:23:09.674 "ffdhe3072", 00:23:09.674 "ffdhe4096", 00:23:09.674 "ffdhe6144", 00:23:09.674 "ffdhe8192" 00:23:09.674 ], 00:23:09.674 "dhchap_digests": [ 00:23:09.674 "sha256", 00:23:09.674 "sha384", 00:23:09.674 "sha512" 00:23:09.674 ], 00:23:09.674 "disable_auto_failback": false, 00:23:09.674 "fast_io_fail_timeout_sec": 0, 00:23:09.674 "generate_uuids": false, 00:23:09.674 "high_priority_weight": 0, 00:23:09.674 "io_path_stat": false, 00:23:09.674 "io_queue_requests": 512, 00:23:09.674 "keep_alive_timeout_ms": 10000, 00:23:09.674 "low_priority_weight": 0, 00:23:09.674 "medium_priority_weight": 0, 00:23:09.674 "nvme_adminq_poll_period_us": 10000, 00:23:09.674 "nvme_error_stat": false, 00:23:09.674 "nvme_ioq_poll_period_us": 0, 00:23:09.674 "rdma_cm_event_timeout_ms": 0, 00:23:09.674 "rdma_max_cq_size": 0, 00:23:09.674 "rdma_srq_size": 0, 00:23:09.674 "reconnect_delay_sec": 0, 00:23:09.674 "timeout_admin_us": 0, 00:23:09.674 "timeout_us": 0, 00:23:09.674 "transport_ack_timeout": 0, 00:23:09.674 
"transport_retry_count": 4, 00:23:09.674 "transport_tos": 0 00:23:09.674 } 00:23:09.674 }, 00:23:09.674 { 00:23:09.674 "method": "bdev_nvme_attach_controller", 00:23:09.674 "params": { 00:23:09.674 "adrfam": "IPv4", 00:23:09.674 "ctrlr_loss_timeout_sec": 0, 00:23:09.674 "ddgst": false, 00:23:09.674 "fast_io_fail_timeout_sec": 0, 00:23:09.674 "hdgst": false, 00:23:09.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.674 "name": "nvme0", 00:23:09.674 "prchk_guard": false, 00:23:09.674 "prchk_reftag": false, 00:23:09.674 "psk": "key0", 00:23:09.674 "reconnect_delay_sec": 0, 00:23:09.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.674 "traddr": "10.0.0.3", 00:23:09.674 "trsvcid": "4420", 00:23:09.674 "trtype": "TCP" 00:23:09.674 } 00:23:09.674 }, 00:23:09.674 { 00:23:09.674 "method": "bdev_nvme_set_hotplug", 00:23:09.674 "params": { 00:23:09.674 "enable": false, 00:23:09.674 "period_us": 100000 00:23:09.674 } 00:23:09.674 }, 00:23:09.674 { 00:23:09.674 "method": "bdev_enable_histogram", 00:23:09.674 "params": { 00:23:09.674 "enable": true, 00:23:09.674 "name": "nvme0n1" 00:23:09.674 } 00:23:09.674 }, 00:23:09.674 { 00:23:09.674 "method": "bdev_wait_for_examine" 00:23:09.674 } 00:23:09.674 ] 00:23:09.674 }, 00:23:09.674 { 00:23:09.674 "subsystem": "nbd", 00:23:09.674 "config": [] 00:23:09.674 } 00:23:09.674 ] 00:23:09.674 }' 00:23:09.674 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 101984 00:23:09.674 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101984 ']' 00:23:09.674 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101984 00:23:09.674 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:09.674 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.674 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101984 00:23:09.674 killing process with pid 101984 00:23:09.674 Received shutdown signal, test time was about 1.000000 seconds 00:23:09.674 00:23:09.674 Latency(us) 00:23:09.674 [2024-12-14T18:43:25.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.674 [2024-12-14T18:43:25.427Z] =================================================================================================================== 00:23:09.674 [2024-12-14T18:43:25.427Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:09.674 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:09.674 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:09.674 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101984' 00:23:09.674 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101984 00:23:09.674 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101984 00:23:09.932 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 101947 00:23:09.932 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101947 ']' 00:23:09.932 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101947 00:23:09.932 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # 
uname 00:23:09.932 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.932 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101947 00:23:09.932 killing process with pid 101947 00:23:09.932 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:09.932 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:09.932 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101947' 00:23:09.932 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101947 00:23:09.932 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101947 00:23:10.190 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:10.190 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:10.190 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:10.190 "subsystems": [ 00:23:10.190 { 00:23:10.190 "subsystem": "keyring", 00:23:10.190 "config": [ 00:23:10.190 { 00:23:10.190 "method": "keyring_file_add_key", 00:23:10.190 "params": { 00:23:10.190 "name": "key0", 00:23:10.190 "path": "/tmp/tmp.lNsBQVAT5Y" 00:23:10.190 } 00:23:10.190 } 00:23:10.190 ] 00:23:10.190 }, 00:23:10.190 { 00:23:10.190 "subsystem": "iobuf", 00:23:10.190 "config": [ 00:23:10.190 { 00:23:10.190 "method": "iobuf_set_options", 00:23:10.190 "params": { 00:23:10.190 "large_bufsize": 135168, 00:23:10.190 "large_pool_count": 1024, 00:23:10.190 "small_bufsize": 8192, 00:23:10.190 "small_pool_count": 8192 00:23:10.190 } 00:23:10.190 } 00:23:10.190 ] 00:23:10.190 }, 00:23:10.190 { 00:23:10.190 "subsystem": "sock", 00:23:10.190 "config": [ 00:23:10.190 { 00:23:10.190 "method": "sock_set_default_impl", 00:23:10.190 "params": { 00:23:10.190 "impl_name": "posix" 00:23:10.190 } 00:23:10.190 }, 00:23:10.190 { 00:23:10.190 "method": "sock_impl_set_options", 00:23:10.190 "params": { 00:23:10.190 "enable_ktls": false, 00:23:10.190 "enable_placement_id": 0, 00:23:10.190 "enable_quickack": false, 00:23:10.190 "enable_recv_pipe": true, 00:23:10.190 "enable_zerocopy_send_client": false, 00:23:10.190 "enable_zerocopy_send_server": true, 00:23:10.190 "impl_name": "ssl", 00:23:10.190 "recv_buf_size": 4096, 00:23:10.190 "send_buf_size": 4096, 00:23:10.190 "tls_version": 0, 00:23:10.190 "zerocopy_threshold": 0 00:23:10.190 } 00:23:10.190 }, 00:23:10.190 { 00:23:10.190 "method": "sock_impl_set_options", 00:23:10.190 "params": { 00:23:10.190 "enable_ktls": false, 00:23:10.190 "enable_placement_id": 0, 00:23:10.190 "enable_quickack": false, 00:23:10.190 "enable_recv_pipe": true, 00:23:10.190 "enable_zerocopy_send_client": false, 00:23:10.190 "enable_zerocopy_send_server": true, 00:23:10.190 "impl_name": "posix", 00:23:10.190 "recv_buf_size": 2097152, 00:23:10.190 "send_buf_size": 2097152, 00:23:10.190 "tls_version": 0, 00:23:10.190 "zerocopy_threshold": 0 00:23:10.190 } 00:23:10.190 } 00:23:10.190 ] 00:23:10.190 }, 00:23:10.190 { 00:23:10.190 "subsystem": "vmd", 00:23:10.190 "config": [] 00:23:10.190 }, 00:23:10.190 { 00:23:10.190 "subsystem": "accel", 00:23:10.190 "config": [ 00:23:10.190 { 00:23:10.190 "method": "accel_set_options", 00:23:10.190 "params": { 00:23:10.190 "buf_count": 2048, 00:23:10.190 
"large_cache_size": 16, 00:23:10.190 "sequence_count": 2048, 00:23:10.190 "small_cache_size": 128, 00:23:10.190 "task_count": 2048 00:23:10.190 } 00:23:10.190 } 00:23:10.190 ] 00:23:10.190 }, 00:23:10.190 { 00:23:10.190 "subsystem": "bdev", 00:23:10.190 "config": [ 00:23:10.190 { 00:23:10.190 "method": "bdev_set_options", 00:23:10.190 "params": { 00:23:10.190 "bdev_auto_examine": true, 00:23:10.190 "bdev_io_cache_size": 256, 00:23:10.190 "bdev_io_pool_size": 65535, 00:23:10.190 "iobuf_large_cache_size": 16, 00:23:10.190 "iobuf_small_cache_size": 128 00:23:10.190 } 00:23:10.190 }, 00:23:10.190 { 00:23:10.190 "method": "bdev_raid_set_options", 00:23:10.190 "params": { 00:23:10.190 "process_max_bandwidth_mb_sec": 0, 00:23:10.190 "process_window_size_kb": 1024 00:23:10.190 } 00:23:10.190 }, 00:23:10.190 { 00:23:10.190 "method": "bdev_iscsi_set_options", 00:23:10.190 "params": { 00:23:10.190 "timeout_sec": 30 00:23:10.190 } 00:23:10.190 }, 00:23:10.190 { 00:23:10.190 "method": "bdev_nvme_set_options", 00:23:10.190 "params": { 00:23:10.190 "action_on_timeout": "none", 00:23:10.190 "allow_accel_sequence": false, 00:23:10.190 "arbitration_burst": 0, 00:23:10.190 "bdev_retry_count": 3, 00:23:10.190 "ctrlr_loss_timeout_sec": 0, 00:23:10.190 "delay_cmd_submit": true, 00:23:10.190 "dhchap_dhgroups": [ 00:23:10.190 "null", 00:23:10.190 "ffdhe2048", 00:23:10.190 "ffdhe3072", 00:23:10.190 "ffdhe4096", 00:23:10.190 "ffdhe6144", 00:23:10.190 "ffdhe8192" 00:23:10.190 ], 00:23:10.191 "dhchap_digests": [ 00:23:10.191 "sha256", 00:23:10.191 "sha384", 00:23:10.191 "sha512" 00:23:10.191 ], 00:23:10.191 "disable_auto_failback": false, 00:23:10.191 "fast_io_fail_timeout_sec": 0, 00:23:10.191 "generate_uuids": false, 00:23:10.191 "high_priority_weight": 0, 00:23:10.191 "io_path_stat": false, 00:23:10.191 "io_queue_requests": 0, 00:23:10.191 "keep_alive_timeout_ms": 10000, 00:23:10.191 "low_priority_weight": 0, 00:23:10.191 "medium_priority_weight": 0, 00:23:10.191 "nvme_adminq_poll_period_us": 10000, 00:23:10.191 "nvme_error_stat": false, 00:23:10.191 "nvme_ioq_poll_period_us": 0, 00:23:10.191 "rdma_cm_event_timeout_ms": 0, 00:23:10.191 "rdma_max_cq_size": 0, 00:23:10.191 "rdma_srq_size": 0, 00:23:10.191 "reconnect_delay_sec": 0, 00:23:10.191 "timeout_admin_us": 0, 00:23:10.191 "timeout_us": 0, 00:23:10.191 "transport_ack_timeout": 0, 00:23:10.191 "transport_retry_count": 4, 00:23:10.191 "transport_tos": 0 00:23:10.191 } 00:23:10.191 }, 00:23:10.191 { 00:23:10.191 "method": "bdev_nvme_set_hotplug", 00:23:10.191 "params": { 00:23:10.191 "enable": false, 00:23:10.191 "period_us": 100000 00:23:10.191 } 00:23:10.191 }, 00:23:10.191 { 00:23:10.191 "method": "bdev_malloc_create", 00:23:10.191 "params": { 00:23:10.191 "block_size": 4096, 00:23:10.191 "dif_is_head_of_md": false, 00:23:10.191 "dif_pi_format": 0, 00:23:10.191 "dif_type": 0, 00:23:10.191 "md_size": 0, 00:23:10.191 "name": "malloc0", 00:23:10.191 "num_blocks": 8192, 00:23:10.191 "optimal_io_boundary": 0, 00:23:10.191 "physical_block_size": 4096, 00:23:10.191 "uuid": "89b50e64-cdb2-48b6-8797-9eb49b437133" 00:23:10.191 } 00:23:10.191 }, 00:23:10.191 { 00:23:10.191 "method": "bdev_wait_for_examine" 00:23:10.191 } 00:23:10.191 ] 00:23:10.191 }, 00:23:10.191 { 00:23:10.191 "subsystem": "nbd", 00:23:10.191 "config": [] 00:23:10.191 }, 00:23:10.191 { 00:23:10.191 "subsystem": "scheduler", 00:23:10.191 "config": [ 00:23:10.191 { 00:23:10.191 "method": "framework_set_scheduler", 00:23:10.191 "params": { 00:23:10.191 "name": "static" 00:23:10.191 } 00:23:10.191 } 
00:23:10.191 ] 00:23:10.191 }, 00:23:10.191 { 00:23:10.191 "subsystem": "nvmf", 00:23:10.191 "config": [ 00:23:10.191 { 00:23:10.191 "method": "nvmf_set_config", 00:23:10.191 "params": { 00:23:10.191 "admin_cmd_passthru": { 00:23:10.191 "identify_ctrlr": false 00:23:10.191 }, 00:23:10.191 "dhchap_dhgroups": [ 00:23:10.191 "null", 00:23:10.191 "ffdhe2048", 00:23:10.191 "ffdhe3072", 00:23:10.191 "ffdhe4096", 00:23:10.191 "ffdhe6144", 00:23:10.191 "ffdhe8192" 00:23:10.191 ], 00:23:10.191 "dhchap_digests": [ 00:23:10.191 "sha256", 00:23:10.191 "sha384", 00:23:10.191 "sha512" 00:23:10.191 ], 00:23:10.191 "discovery_filter": "match_any" 00:23:10.191 } 00:23:10.191 }, 00:23:10.191 { 00:23:10.191 "method": "nvmf_set_max_subsystems", 00:23:10.191 "params": { 00:23:10.191 "max_subsystems": 1024 00:23:10.191 } 00:23:10.191 }, 00:23:10.191 { 00:23:10.191 "method": "nvmf_set_crdt", 00:23:10.191 "params": { 00:23:10.191 "crdt1": 0, 00:23:10.191 "crdt2": 0, 00:23:10.191 "crdt3": 0 00:23:10.191 } 00:23:10.191 }, 00:23:10.191 { 00:23:10.191 "method": "nvmf_create_transport", 00:23:10.191 "params": { 00:23:10.191 "abort_timeout_sec": 1, 00:23:10.191 "ack_timeout": 0, 00:23:10.191 "buf_cache_size": 4294967295, 00:23:10.191 "c2h_success": false, 00:23:10.191 "data_wr_pool_size": 0, 00:23:10.191 "dif_insert_or_strip": false, 00:23:10.191 "in_capsule_data_size": 4096, 00:23:10.191 "io_unit_size": 131072, 00:23:10.191 "max_aq_depth": 128, 00:23:10.191 "max_io_qpairs_per_ctrlr": 127, 00:23:10.191 "max_io_size": 131072, 00:23:10.191 "max_queue_depth": 128, 00:23:10.191 "num_shared_buffers": 511, 00:23:10.191 "sock_priority": 0, 00:23:10.191 "trtype": "TCP", 00:23:10.191 "zcopy": false 00:23:10.191 } 00:23:10.191 }, 00:23:10.191 { 00:23:10.191 "method": "nvmf_create_subsystem", 00:23:10.191 "params": { 00:23:10.191 "allow_any_host": false, 00:23:10.191 "ana_reporting": false, 00:23:10.191 "max_cntlid": 65519, 00:23:10.191 "max_namespaces": 32, 00:23:10.191 "min_cntlid": 1, 00:23:10.191 "model_number": "SPDK bdev Controller", 00:23:10.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.191 "serial_number": "00000000000000000000" 00:23:10.191 } 00:23:10.191 }, 00:23:10.191 { 00:23:10.191 "method": "nvmf_subsystem_add_host", 00:23:10.191 "params": { 00:23:10.191 "host": "nqn.2016-06.io.spdk:host1", 00:23:10.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.191 "psk": "key0" 00:23:10.191 } 00:23:10.191 }, 00:23:10.191 { 00:23:10.191 "method": "nvmf_subsystem_add_ns", 00:23:10.191 "params": { 00:23:10.191 "namespace": { 00:23:10.191 "bdev_name": "malloc0", 00:23:10.191 "nguid": "89B50E64CDB248B687979EB49B437133", 00:23:10.191 "no_auto_visible": false, 00:23:10.191 "nsid": 1, 00:23:10.191 "uuid": "89b50e64-cdb2-48b6-8797-9eb49b437133" 00:23:10.191 }, 00:23:10.191 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:23:10.191 } 00:23:10.191 }, 00:23:10.191 { 00:23:10.191 "method": "nvmf_subsystem_add_listener", 00:23:10.191 "params": { 00:23:10.191 "listen_address": { 00:23:10.191 "adrfam": "IPv4", 00:23:10.191 "traddr": "10.0.0.3", 00:23:10.191 "trsvcid": "4420", 00:23:10.191 "trtype": "TCP" 00:23:10.191 }, 00:23:10.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.191 "secure_channel": false, 00:23:10.191 "sock_impl": "ssl" 00:23:10.191 } 00:23:10.191 } 00:23:10.191 ] 00:23:10.191 } 00:23:10.191 ] 00:23:10.191 }' 00:23:10.191 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:10.191 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.191 
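The JSON document echoed above is the complete target-side configuration that the test pipes to nvmf_tgt over a file descriptor. A minimal sketch of reproducing that launch by hand, assuming the same JSON has been saved to a hypothetical config.json; the namespace, flags, and RPC socket mirror the invocation recorded just below in the log.

# Sketch only: config.json is assumed to hold the JSON subsystem config echoed above.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -c config.json &
# One way to confirm the app is listening before issuing further RPCs:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null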
18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=102076 00:23:10.191 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:10.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.191 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 102076 00:23:10.191 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 102076 ']' 00:23:10.191 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.191 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:10.191 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.191 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:10.191 18:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.450 [2024-12-14 18:43:25.978493] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:10.450 [2024-12-14 18:43:25.978647] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.450 [2024-12-14 18:43:26.111611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.450 [2024-12-14 18:43:26.192222] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.450 [2024-12-14 18:43:26.192290] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.450 [2024-12-14 18:43:26.192317] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.450 [2024-12-14 18:43:26.192325] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.450 [2024-12-14 18:43:26.192331] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
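The startup notices above also document how a runtime trace of this target can be captured. A short sketch following the log's own hints; the tool name, shm id, and trace file are taken verbatim from the notices, while the copy destination is an arbitrary assumption.

# Snapshot tracepoints of the running nvmf app with shm id 0, per the NOTICE above:
spdk_trace -s nvmf -i 0
# Or keep the shared-memory trace file for offline analysis/debug:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0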
00:23:10.450 [2024-12-14 18:43:26.192412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.017 [2024-12-14 18:43:26.460997] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.017 [2024-12-14 18:43:26.500561] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.017 [2024-12-14 18:43:26.500795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:11.276 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:11.276 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:11.276 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:11.276 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:11.276 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.276 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.276 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=102120 00:23:11.276 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 102120 /var/tmp/bdevperf.sock 00:23:11.276 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 102120 ']' 00:23:11.276 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:11.276 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:11.276 "subsystems": [ 00:23:11.276 { 00:23:11.276 "subsystem": "keyring", 00:23:11.276 "config": [ 00:23:11.276 { 00:23:11.276 "method": "keyring_file_add_key", 00:23:11.276 "params": { 00:23:11.276 "name": "key0", 00:23:11.276 "path": "/tmp/tmp.lNsBQVAT5Y" 00:23:11.276 } 00:23:11.276 } 00:23:11.276 ] 00:23:11.276 }, 00:23:11.276 { 00:23:11.276 "subsystem": "iobuf", 00:23:11.276 "config": [ 00:23:11.276 { 00:23:11.276 "method": "iobuf_set_options", 00:23:11.276 "params": { 00:23:11.276 "large_bufsize": 135168, 00:23:11.276 "large_pool_count": 1024, 00:23:11.276 "small_bufsize": 8192, 00:23:11.276 "small_pool_count": 8192 00:23:11.276 } 00:23:11.276 } 00:23:11.277 ] 00:23:11.277 }, 00:23:11.277 { 00:23:11.277 "subsystem": "sock", 00:23:11.277 "config": [ 00:23:11.277 { 00:23:11.277 "method": "sock_set_default_impl", 00:23:11.277 "params": { 00:23:11.277 "impl_name": "posix" 00:23:11.277 } 00:23:11.277 }, 00:23:11.277 { 00:23:11.277 "method": "sock_impl_set_options", 00:23:11.277 "params": { 00:23:11.277 "enable_ktls": false, 00:23:11.277 "enable_placement_id": 0, 00:23:11.277 "enable_quickack": false, 00:23:11.277 "enable_recv_pipe": true, 00:23:11.277 "enable_zerocopy_send_client": false, 00:23:11.277 "enable_zerocopy_send_server": true, 00:23:11.277 "impl_name": "ssl", 00:23:11.277 "recv_buf_size": 4096, 00:23:11.277 "send_buf_size": 4096, 00:23:11.277 "tls_version": 0, 00:23:11.277 "zerocopy_threshold": 0 00:23:11.277 } 00:23:11.277 }, 00:23:11.277 { 00:23:11.277 "method": "sock_impl_set_options", 00:23:11.277 "params": { 00:23:11.277 "enable_ktls": false, 00:23:11.277 "enable_placement_id": 0, 00:23:11.277 "enable_quickack": false, 00:23:11.277 "enable_recv_pipe": true, 00:23:11.277 
"enable_zerocopy_send_client": false, 00:23:11.277 "enable_zerocopy_send_server": true, 00:23:11.277 "impl_name": "posix", 00:23:11.277 "recv_buf_size": 2097152, 00:23:11.277 "send_buf_size": 2097152, 00:23:11.277 "tls_version": 0, 00:23:11.277 "zerocopy_threshold": 0 00:23:11.277 } 00:23:11.277 } 00:23:11.277 ] 00:23:11.277 }, 00:23:11.277 { 00:23:11.277 "subsystem": "vmd", 00:23:11.277 "config": [] 00:23:11.277 }, 00:23:11.277 { 00:23:11.277 "subsystem": "accel", 00:23:11.277 "config": [ 00:23:11.277 { 00:23:11.277 "method": "accel_set_options", 00:23:11.277 "params": { 00:23:11.277 "buf_count": 2048, 00:23:11.277 "large_cache_size": 16, 00:23:11.277 "sequence_count": 2048, 00:23:11.277 "small_cache_size": 128, 00:23:11.277 "task_count": 2048 00:23:11.277 } 00:23:11.277 } 00:23:11.277 ] 00:23:11.277 }, 00:23:11.277 { 00:23:11.277 "subsystem": "bdev", 00:23:11.277 "config": [ 00:23:11.277 { 00:23:11.277 "method": "bdev_set_options", 00:23:11.277 "params": { 00:23:11.277 "bdev_auto_examine": true, 00:23:11.277 "bdev_io_cache_size": 256, 00:23:11.277 "bdev_io_pool_size": 65535, 00:23:11.277 "iobuf_large_cache_size": 16, 00:23:11.277 "iobuf_small_cache_size": 128 00:23:11.277 } 00:23:11.277 }, 00:23:11.277 { 00:23:11.277 "method": "bdev_raid_set_options", 00:23:11.277 "params": { 00:23:11.277 "process_max_bandwidth_mb_sec": 0, 00:23:11.277 "process_window_size_kb": 1024 00:23:11.277 } 00:23:11.277 }, 00:23:11.277 { 00:23:11.277 "method": "bdev_iscsi_set_options", 00:23:11.277 "params": { 00:23:11.277 "timeout_sec": 30 00:23:11.277 } 00:23:11.277 }, 00:23:11.277 { 00:23:11.277 "method": "bdev_nvme_set_options", 00:23:11.277 "params": { 00:23:11.277 "action_on_timeout": "none", 00:23:11.277 "allow_accel_sequence": false, 00:23:11.277 "arbitration_burst": 0, 00:23:11.277 "bdev_retry_count": 3, 00:23:11.277 "ctrlr_loss_timeout_sec": 0, 00:23:11.277 "delay_cmd_submit": true, 00:23:11.277 "dhchap_dhgroups": [ 00:23:11.277 "null", 00:23:11.277 "ffdhe2048", 00:23:11.277 "ffdhe3072", 00:23:11.277 "ffdhe4096", 00:23:11.277 "ffdhe6144", 00:23:11.277 "ffdhe8192" 00:23:11.277 ], 00:23:11.277 "dhchap_digests": [ 00:23:11.277 "sha256", 00:23:11.277 "sha384", 00:23:11.277 "sha512" 00:23:11.277 ], 00:23:11.277 "disable_auto_failback": false, 00:23:11.277 "fast_io_fail_timeout_sec": 0, 00:23:11.277 "generate_uuids": false, 00:23:11.277 "high_priority_weight": 0, 00:23:11.277 "io_path_stat": false, 00:23:11.277 "io_queue_requests": 512, 00:23:11.277 "keep_alive_timeout_ms": 10000, 00:23:11.277 "low_priority_weight": 0, 00:23:11.277 "medium_priority_weight": 0, 00:23:11.277 "nvme_adminq_poll_period_us": 10000, 00:23:11.277 "nvme_error_stat": false, 00:23:11.277 "nvme_ioq_poll_period_us": 0, 00:23:11.277 "rdma_cm_event_timeout_ms": 0, 00:23:11.277 "rdma_max_cq_size": 0, 00:23:11.277 "rdma_srq_size": 0, 00:23:11.277 "reconnect_delay_sec": 0, 00:23:11.277 "timeout_admin_us": 0, 00:23:11.277 "timeout_us": 0, 00:23:11.277 "transport_ack_timeout": 0, 00:23:11.277 "transport_retry_count": 4, 00:23:11.277 "transport_tos": 0 00:23:11.277 } 00:23:11.277 }, 00:23:11.277 { 00:23:11.277 "method": "bdev_nvme_attach_controller", 00:23:11.277 "params": { 00:23:11.277 "adrfam": "IPv4", 00:23:11.277 "ctrlr_loss_timeout_sec": 0, 00:23:11.277 "ddgst": false, 00:23:11.277 "fast_io_fail_timeout_sec": 0, 00:23:11.277 "hdgst": false, 00:23:11.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.277 "name": "nvme0", 00:23:11.277 "prchk_guard": false, 00:23:11.277 "prchk_reftag": false, 00:23:11.277 "psk": "key0", 00:23:11.277 
"reconnect_delay_sec": 0, 00:23:11.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.277 "traddr": "10.0.0.3", 00:23:11.277 "trsvcid": "4420", 00:23:11.277 "trtype": "TCP" 00:23:11.277 } 00:23:11.277 }, 00:23:11.277 { 00:23:11.277 "method": "bdev_nvme_set_hotplug", 00:23:11.277 "params": { 00:23:11.277 "enable": false, 00:23:11.277 "period_us": 100000 00:23:11.277 } 00:23:11.277 }, 00:23:11.277 { 00:23:11.277 "method": "bdev_enable_histogram", 00:23:11.277 "params": { 00:23:11.277 "enable": true, 00:23:11.277 "name": "nvme0n1" 00:23:11.277 } 00:23:11.277 }, 00:23:11.277 { 00:23:11.277 "method": "bdev_wait_for_examine" 00:23:11.277 } 00:23:11.277 ] 00:23:11.277 }, 00:23:11.277 { 00:23:11.277 "subsystem": "nbd", 00:23:11.277 "config": [] 00:23:11.277 } 00:23:11.277 ] 00:23:11.277 }' 00:23:11.277 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.277 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:11.277 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.277 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:11.277 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.536 [2024-12-14 18:43:27.052217] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:11.536 [2024-12-14 18:43:27.052337] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102120 ] 00:23:11.536 [2024-12-14 18:43:27.193918] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.795 [2024-12-14 18:43:27.289373] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.795 [2024-12-14 18:43:27.496546] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.363 18:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:12.363 18:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:12.363 18:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:12.363 18:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:12.622 18:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.622 18:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:12.880 Running I/O for 1 seconds... 
00:23:13.817 3991.00 IOPS, 15.59 MiB/s 00:23:13.817 Latency(us) 00:23:13.817 [2024-12-14T18:43:29.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.817 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:13.817 Verification LBA range: start 0x0 length 0x2000 00:23:13.817 nvme0n1 : 1.01 4063.93 15.87 0.00 0.00 31300.80 5183.30 25261.15 00:23:13.817 [2024-12-14T18:43:29.570Z] =================================================================================================================== 00:23:13.817 [2024-12-14T18:43:29.570Z] Total : 4063.93 15.87 0.00 0.00 31300.80 5183.30 25261.15 00:23:13.817 { 00:23:13.817 "results": [ 00:23:13.817 { 00:23:13.817 "job": "nvme0n1", 00:23:13.817 "core_mask": "0x2", 00:23:13.817 "workload": "verify", 00:23:13.817 "status": "finished", 00:23:13.817 "verify_range": { 00:23:13.817 "start": 0, 00:23:13.817 "length": 8192 00:23:13.817 }, 00:23:13.817 "queue_depth": 128, 00:23:13.817 "io_size": 4096, 00:23:13.817 "runtime": 1.01355, 00:23:13.817 "iops": 4063.933698386858, 00:23:13.817 "mibps": 15.874741009323664, 00:23:13.817 "io_failed": 0, 00:23:13.817 "io_timeout": 0, 00:23:13.817 "avg_latency_us": 31300.80229534971, 00:23:13.817 "min_latency_us": 5183.301818181818, 00:23:13.817 "max_latency_us": 25261.14909090909 00:23:13.817 } 00:23:13.817 ], 00:23:13.817 "core_count": 1 00:23:13.817 } 00:23:13.817 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:13.817 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:13.817 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:13.817 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:23:13.817 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:23:13.817 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:13.817 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:13.817 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:13.817 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:23:13.817 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:13.817 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:13.817 nvmf_trace.0 00:23:14.076 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:23:14.076 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 102120 00:23:14.076 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 102120 ']' 00:23:14.076 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 102120 00:23:14.076 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:14.076 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:14.076 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102120 00:23:14.076 killing process 
with pid 102120 00:23:14.076 Received shutdown signal, test time was about 1.000000 seconds 00:23:14.076 00:23:14.076 Latency(us) 00:23:14.076 [2024-12-14T18:43:29.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.076 [2024-12-14T18:43:29.829Z] =================================================================================================================== 00:23:14.076 [2024-12-14T18:43:29.829Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:14.076 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:14.076 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:14.076 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102120' 00:23:14.076 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 102120 00:23:14.076 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 102120 00:23:14.335 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:14.335 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:14.335 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:14.335 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:14.335 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:14.335 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:14.335 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:14.335 rmmod nvme_tcp 00:23:14.335 rmmod nvme_fabrics 00:23:14.335 rmmod nvme_keyring 00:23:14.335 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:14.335 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:14.335 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:14.335 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 102076 ']' 00:23:14.335 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 102076 00:23:14.335 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 102076 ']' 00:23:14.335 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 102076 00:23:14.335 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:14.335 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:14.335 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102076 00:23:14.593 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:14.593 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:14.593 killing process with pid 102076 00:23:14.593 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102076' 00:23:14.593 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 102076 00:23:14.593 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- 
# wait 102076 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.9yUKkHxCB4 /tmp/tmp.tdzgUTXCIY /tmp/tmp.lNsBQVAT5Y 00:23:14.852 00:23:14.852 real 1m31.904s 00:23:14.852 user 2m24.773s 00:23:14.852 sys 0m33.113s 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:14.852 ************************************ 00:23:14.852 END TEST nvmf_tls 00:23:14.852 
************************************ 00:23:14.852 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:15.112 ************************************ 00:23:15.112 START TEST nvmf_fips 00:23:15.112 ************************************ 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:15.112 * Looking for test storage... 00:23:15.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:15.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.112 --rc genhtml_branch_coverage=1 00:23:15.112 --rc genhtml_function_coverage=1 00:23:15.112 --rc genhtml_legend=1 00:23:15.112 --rc geninfo_all_blocks=1 00:23:15.112 --rc geninfo_unexecuted_blocks=1 00:23:15.112 00:23:15.112 ' 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:15.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.112 --rc genhtml_branch_coverage=1 00:23:15.112 --rc genhtml_function_coverage=1 00:23:15.112 --rc genhtml_legend=1 00:23:15.112 --rc geninfo_all_blocks=1 00:23:15.112 --rc geninfo_unexecuted_blocks=1 00:23:15.112 00:23:15.112 ' 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:15.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.112 --rc genhtml_branch_coverage=1 00:23:15.112 --rc genhtml_function_coverage=1 00:23:15.112 --rc genhtml_legend=1 00:23:15.112 --rc geninfo_all_blocks=1 00:23:15.112 --rc geninfo_unexecuted_blocks=1 00:23:15.112 00:23:15.112 ' 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:15.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.112 --rc genhtml_branch_coverage=1 00:23:15.112 --rc genhtml_function_coverage=1 00:23:15.112 --rc genhtml_legend=1 00:23:15.112 --rc geninfo_all_blocks=1 00:23:15.112 --rc geninfo_unexecuted_blocks=1 00:23:15.112 00:23:15.112 ' 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
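The trace above walks scripts/common.sh's cmp_versions helper as it decides that the installed lcov (1.15) predates 2.x, which selects the legacy LCOV_OPTS. A condensed bash sketch of the component-wise comparison, reconstructed from the traced statements; ver_lt is a hypothetical name, and the real helper additionally normalizes non-numeric components through its decimal function.

# Returns 0 (true) when $1 sorts strictly before $2, comparing .-:-separated fields.
ver_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # greater: not "less than"
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1  # equal: not "less than"
}
ver_lt 1.15 2 && echo "lcov is older than 2.x"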
00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.112 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.372 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.372 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.372 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.372 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.372 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:15.372 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.372 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:15.372 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:15.372 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:15.372 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.372 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.372 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:15.373 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:23:15.373 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:23:15.373 Error setting digest 00:23:15.373 40B274C42C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:15.373 40B274C42C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:15.373 
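The sequence above is fips.sh's environment probe: require OpenSSL >= 3.0.0, require the distro's fips.so module and a fips-named provider, then prove enforcement by expecting openssl md5 to fail; the "Error setting digest" output is the desired outcome, which is why the trace treats es=1 as success. A hand-run sketch of the same probe (fips.sh additionally points OPENSSL_CONF at a generated spdk_fips.conf first, which is omitted here):

# All checks below mirror commands visible in the traced fips.sh logic.
openssl version                                  # expect >= 3.0.0
test -f /usr/lib64/ossl-modules/fips.so          # FIPS module present (RHEL path)
openssl list -providers | grep name              # expect a base and a fips provider
if echo test | openssl md5 2>/dev/null; then
    echo "FIPS mode NOT enforced: md5 unexpectedly succeeded" >&2
fi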
18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:15.373 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.374 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:15.374 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:15.374 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:15.374 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:15.374 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:15.374 Cannot find device "nvmf_init_br" 00:23:15.374 18:43:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:23:15.374 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:15.374 Cannot find device "nvmf_init_br2" 00:23:15.374 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:23:15.374 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:15.374 Cannot find device "nvmf_tgt_br" 00:23:15.374 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:23:15.374 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:15.374 Cannot find device "nvmf_tgt_br2" 00:23:15.374 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:23:15.374 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:15.374 Cannot find device "nvmf_init_br" 00:23:15.374 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:23:15.374 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:15.374 Cannot find device "nvmf_init_br2" 00:23:15.374 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:23:15.374 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:15.374 Cannot find device "nvmf_tgt_br" 00:23:15.374 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:23:15.374 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:15.632 Cannot find device "nvmf_tgt_br2" 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:15.632 Cannot find device "nvmf_br" 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:15.632 Cannot find device "nvmf_init_if" 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:15.632 Cannot find device "nvmf_init_if2" 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:15.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:15.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:15.632 18:43:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:15.632 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:15.633 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:15.633 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:15.633 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:15.633 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:15.633 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:15.633 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:15.633 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:15.633 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:15.633 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:15.633 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:15.633 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
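The nvmf_veth_init sequence traced above builds a private topology: a namespace nvmf_tgt_ns_spdk holds the target-side veth ends (10.0.0.3/24 and 10.0.0.4/24), while their bridge-side peers and the initiator-side peers are enslaved to a bridge nvmf_br. A condensed sketch of the same wiring, reduced to a single initiator/target pair (run as root; the suite does two pairs):

```bash
#!/usr/bin/env bash
# Condensed sketch of the veth/bridge topology built by nvmf_veth_init,
# reduced to one initiator/target pair.
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the _if ends carry addresses, the _br peers get bridged below.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the two sides so 10.0.0.1 can reach 10.0.0.3.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
```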
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:15.892 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:15.892 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:23:15.892 00:23:15.892 --- 10.0.0.3 ping statistics --- 00:23:15.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.892 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:15.892 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:15.892 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:23:15.892 00:23:15.892 --- 10.0.0.4 ping statistics --- 00:23:15.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.892 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:15.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:15.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:23:15.892 00:23:15.892 --- 10.0.0.1 ping statistics --- 00:23:15.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.892 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:15.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
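Note how each `ipts` call above expands to plain iptables plus `-m comment --comment 'SPDK_NVMF:<rule>'`: every rule the suite installs is stamped with a recognizable tag, so teardown can strip exactly those rules without tracking any state (the iptables-save round trip appears later in nvmftestfini). A sketch of that pattern:

```bash
# Sketch of the ipts tagging pattern: every installed rule carries an
# SPDK_NVMF comment so cleanup can remove all of them in one pass.
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Teardown (the iptr step in nvmftestfini): drop every tagged rule at once.
iptables-save | grep -v SPDK_NVMF | iptables-restore
```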
00:23:15.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:23:15.892 00:23:15.892 --- 10.0.0.2 ping statistics --- 00:23:15.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.892 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # return 0 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=102455 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 102455 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 102455 ']' 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:15.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:15.892 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:15.892 [2024-12-14 18:43:31.576476] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
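With connectivity confirmed by the four pings, nvmfappstart prepends the namespace wrapper to the NVMF_APP command line and launches the target on core mask 0x2, as traced above. A sketch of that launch step, using the array composition and binary path exactly as they appear in this log (the backgrounding and pid capture are implied by the surrounding helpers rather than shown):

```bash
# Sketch of the nvmfappstart launch above: prepend the netns wrapper to the
# app command line, then start the target in the background.
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF)

NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
"${NVMF_APP[@]}" -m 0x2 &
nvmfpid=$!
```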
00:23:15.892 [2024-12-14 18:43:31.577316] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.151 [2024-12-14 18:43:31.716622] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.151 [2024-12-14 18:43:31.803444] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.151 [2024-12-14 18:43:31.803522] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.151 [2024-12-14 18:43:31.803534] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.151 [2024-12-14 18:43:31.803541] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.151 [2024-12-14 18:43:31.803548] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.151 [2024-12-14 18:43:31.803580] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.091 18:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:17.091 18:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:17.091 18:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:17.091 18:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:17.091 18:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:17.091 18:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.091 18:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:17.091 18:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:17.091 18:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:17.091 18:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.em0 00:23:17.091 18:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:17.091 18:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.em0 00:23:17.091 18:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.em0 00:23:17.091 18:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.em0 00:23:17.091 18:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:17.367 [2024-12-14 18:43:32.904200] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.367 [2024-12-14 18:43:32.920151] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:17.367 [2024-12-14 18:43:32.920426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:17.367 malloc0 00:23:17.367 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:17.367 18:43:33 
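The trace above provisions the TLS pre-shared key (a mktemp'd file with mode 0600 holding the NVMeTLSkey string) and then drives setup_nvmf_tgt_conf through rpc.py, ending with a TLS listener on 10.0.0.3:4420 backed by malloc0. The helper's exact RPC sequence is not shown in the log; the sketch below is a plausible reduced version built from standard rpc.py subcommands, with the malloc sizes and the subsystem details as illustrative assumptions:

```bash
# Plausible reduced sketch of the PSK provisioning and target-side TLS setup
# driven by setup_nvmf_tgt_conf above; sizes and exact arguments are assumed.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'

key_path=$(mktemp -t spdk-psk.XXX)
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"              # rpc.py rejects loosely-permissioned PSK files

$RPC nvmf_create_transport -t tcp
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
$RPC keyring_file_add_key key0 "$key_path"
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 --secure-channel
```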
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=102515 00:23:17.367 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:17.367 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 102515 /var/tmp/bdevperf.sock 00:23:17.367 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 102515 ']' 00:23:17.367 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:17.367 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:17.367 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.367 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:17.367 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:17.367 [2024-12-14 18:43:33.075699] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:17.367 [2024-12-14 18:43:33.075773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102515 ] 00:23:17.642 [2024-12-14 18:43:33.211043] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.642 [2024-12-14 18:43:33.296288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.901 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:17.901 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:17.901 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.em0 00:23:18.160 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:18.160 [2024-12-14 18:43:33.895773] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:18.418 TLSTESTn1 00:23:18.418 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:18.418 Running I/O for 10 seconds... 
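On the initiator side, the trace above registers the same PSK file with bdevperf's private RPC socket, attaches a TLS-secured controller (the TLSTESTn1 bdev), and starts the verify workload via bdevperf.py. The same sequence as direct commands, lifted from the trace:

```bash
# Initiator-side sequence from the trace above: start bdevperf in wait-for-RPC
# mode (-z), hand it the PSK, attach over TLS, then kick off the workload.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock

"$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" \
    -q 128 -o 4096 -w verify -t 10 &

"$SPDK/scripts/rpc.py" -s "$SOCK" keyring_file_add_key key0 /tmp/spdk-psk.em0
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# perform_tests starts the queued I/O and reports IOPS/latency on completion.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
```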
00:23:20.730 3400.00 IOPS, 13.28 MiB/s [2024-12-14T18:43:37.418Z] 3420.00 IOPS, 13.36 MiB/s [2024-12-14T18:43:38.354Z] 3461.33 IOPS, 13.52 MiB/s [2024-12-14T18:43:39.289Z] 3498.50 IOPS, 13.67 MiB/s [2024-12-14T18:43:40.224Z] 3522.20 IOPS, 13.76 MiB/s [2024-12-14T18:43:41.158Z] 3536.17 IOPS, 13.81 MiB/s [2024-12-14T18:43:42.094Z] 3561.00 IOPS, 13.91 MiB/s [2024-12-14T18:43:43.472Z] 3604.62 IOPS, 14.08 MiB/s [2024-12-14T18:43:44.408Z] 3613.33 IOPS, 14.11 MiB/s [2024-12-14T18:43:44.408Z] 3632.90 IOPS, 14.19 MiB/s 00:23:28.655 Latency(us) 00:23:28.655 [2024-12-14T18:43:44.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.655 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:28.655 Verification LBA range: start 0x0 length 0x2000 00:23:28.655 TLSTESTn1 : 10.02 3637.60 14.21 0.00 0.00 35120.44 8757.99 25976.09 00:23:28.655 [2024-12-14T18:43:44.408Z] =================================================================================================================== 00:23:28.655 [2024-12-14T18:43:44.408Z] Total : 3637.60 14.21 0.00 0.00 35120.44 8757.99 25976.09 00:23:28.655 { 00:23:28.655 "results": [ 00:23:28.655 { 00:23:28.655 "job": "TLSTESTn1", 00:23:28.655 "core_mask": "0x4", 00:23:28.655 "workload": "verify", 00:23:28.655 "status": "finished", 00:23:28.655 "verify_range": { 00:23:28.655 "start": 0, 00:23:28.655 "length": 8192 00:23:28.655 }, 00:23:28.655 "queue_depth": 128, 00:23:28.655 "io_size": 4096, 00:23:28.655 "runtime": 10.021727, 00:23:28.655 "iops": 3637.5965938804757, 00:23:28.655 "mibps": 14.209361694845608, 00:23:28.655 "io_failed": 0, 00:23:28.655 "io_timeout": 0, 00:23:28.655 "avg_latency_us": 35120.436868767225, 00:23:28.655 "min_latency_us": 8757.992727272727, 00:23:28.655 "max_latency_us": 25976.087272727273 00:23:28.655 } 00:23:28.655 ], 00:23:28.655 "core_count": 1 00:23:28.655 } 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:28.655 nvmf_trace.0 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 102515 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 102515 ']' 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill 
-0 102515 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102515 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:28.655 killing process with pid 102515 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102515' 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 102515 00:23:28.655 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.655 00:23:28.655 Latency(us) 00:23:28.655 [2024-12-14T18:43:44.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.655 [2024-12-14T18:43:44.408Z] =================================================================================================================== 00:23:28.655 [2024-12-14T18:43:44.408Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.655 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 102515 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:28.914 rmmod nvme_tcp 00:23:28.914 rmmod nvme_fabrics 00:23:28.914 rmmod nvme_keyring 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 102455 ']' 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 102455 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 102455 ']' 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 102455 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102455 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:28.914 killing process with pid 102455 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102455' 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 102455 00:23:28.914 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 102455 00:23:29.173 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:29.173 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:29.173 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:29.173 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:23:29.173 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:23:29.173 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:29.173 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:23:29.173 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:29.173 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:29.173 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:29.173 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:29.432 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:29.432 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:29.432 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:29.432 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:29.432 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:29.432 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:29.432 18:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:29.432 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:29.432 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:29.432 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:29.432 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:29.432 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:29.432 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.432 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.432 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.432 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # 
return 0 00:23:29.432 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.em0 00:23:29.432 00:23:29.432 real 0m14.491s 00:23:29.432 user 0m18.201s 00:23:29.432 sys 0m6.627s 00:23:29.432 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:29.432 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:29.432 ************************************ 00:23:29.432 END TEST nvmf_fips 00:23:29.432 ************************************ 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:29.692 ************************************ 00:23:29.692 START TEST nvmf_control_msg_list 00:23:29.692 ************************************ 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:29.692 * Looking for test storage... 00:23:29.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:29.692 18:43:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.692 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:29.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.692 --rc genhtml_branch_coverage=1 00:23:29.692 --rc genhtml_function_coverage=1 00:23:29.692 --rc genhtml_legend=1 00:23:29.692 --rc geninfo_all_blocks=1 00:23:29.692 --rc geninfo_unexecuted_blocks=1 00:23:29.693 00:23:29.693 ' 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:29.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.693 --rc genhtml_branch_coverage=1 00:23:29.693 --rc genhtml_function_coverage=1 00:23:29.693 --rc genhtml_legend=1 00:23:29.693 --rc geninfo_all_blocks=1 00:23:29.693 --rc geninfo_unexecuted_blocks=1 00:23:29.693 00:23:29.693 ' 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:29.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.693 --rc genhtml_branch_coverage=1 00:23:29.693 --rc genhtml_function_coverage=1 00:23:29.693 --rc genhtml_legend=1 00:23:29.693 --rc geninfo_all_blocks=1 00:23:29.693 --rc geninfo_unexecuted_blocks=1 00:23:29.693 00:23:29.693 ' 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:29.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.693 --rc genhtml_branch_coverage=1 00:23:29.693 --rc genhtml_function_coverage=1 00:23:29.693 --rc genhtml_legend=1 00:23:29.693 --rc 
geninfo_all_blocks=1 00:23:29.693 --rc geninfo_unexecuted_blocks=1 00:23:29.693 00:23:29.693 ' 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:29.693 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:29.693 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:29.694 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.694 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:29.694 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:29.694 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:29.694 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:29.694 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:29.694 Cannot find device "nvmf_init_br" 00:23:29.694 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:23:29.694 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:29.694 Cannot find device "nvmf_init_br2" 00:23:29.694 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:23:29.694 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:29.694 Cannot find device "nvmf_tgt_br" 00:23:29.694 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:23:29.694 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:29.694 Cannot find device "nvmf_tgt_br2" 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:29.955 Cannot find device "nvmf_init_br" 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:29.955 Cannot find device "nvmf_init_br2" 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:29.955 Cannot find device "nvmf_tgt_br" 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:29.955 Cannot find device "nvmf_tgt_br2" 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:29.955 Cannot find device "nvmf_br" 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:29.955 Cannot find 
device "nvmf_init_if" 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:29.955 Cannot find device "nvmf_init_if2" 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:29.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:29.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:29.955 18:43:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:29.955 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:30.215 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:30.215 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:23:30.215 00:23:30.215 --- 10.0.0.3 ping statistics --- 00:23:30.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.215 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:30.215 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:30.215 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:23:30.215 00:23:30.215 --- 10.0.0.4 ping statistics --- 00:23:30.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.215 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:30.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:30.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:23:30.215 00:23:30.215 --- 10.0.0.1 ping statistics --- 00:23:30.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.215 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:30.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:30.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:23:30.215 00:23:30.215 --- 10.0.0.2 ping statistics --- 00:23:30.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.215 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # return 0 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=102906 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 102906 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 102906 ']' 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:30.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
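The xtrace above (nvmf/common.sh, nvmf_veth_init) first tries to delete any leftovers from a previous run (the "Cannot find device" / "Cannot open network namespace" lines are the expected misses), then builds the private test network: four veth pairs, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, the host-side peers enslaved to the nvmf_br bridge, iptables ACCEPT rules for the NVMe/TCP port 4420, and ping checks in both directions. A condensed sketch of that topology, reconstructed from the trace (only the first initiator/target pair shown; the *2 pair and error handling are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge joins the two peer ends
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.3                                           # host -> namespaced target sanity check

With that in place the initiator addresses (10.0.0.1/10.0.0.2, host side) and the target addresses (10.0.0.3/10.0.0.4, inside nvmf_tgt_ns_spdk) reach each other through nvmf_br, which is exactly what the four ping blocks above verify before nvmf_tgt is started.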
00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:30.215 18:43:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:30.215 [2024-12-14 18:43:45.846499] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:30.215 [2024-12-14 18:43:45.846592] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.474 [2024-12-14 18:43:45.990900] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.474 [2024-12-14 18:43:46.081818] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.474 [2024-12-14 18:43:46.081886] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.474 [2024-12-14 18:43:46.081900] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.474 [2024-12-14 18:43:46.081910] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.474 [2024-12-14 18:43:46.081919] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.474 [2024-12-14 18:43:46.081952] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:31.410 [2024-12-14 18:43:46.938519] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:31.410 Malloc0 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:31.410 [2024-12-14 18:43:46.986139] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=102955 00:23:31.410 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:31.411 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=102956 00:23:31.411 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:31.411 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=102957 00:23:31.411 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:31.411 18:43:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 102955 00:23:31.669 [2024-12-14 18:43:47.176511] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release. 00:23:31.669 [2024-12-14 18:43:47.197927] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:31.669 [2024-12-14 18:43:47.198864] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:32.606 Initializing NVMe Controllers 00:23:32.606 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:23:32.606 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:32.606 Initialization complete. Launching workers. 00:23:32.606 ======================================================== 00:23:32.606 Latency(us) 00:23:32.606 Device Information : IOPS MiB/s Average min max 00:23:32.606 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3358.97 13.12 297.26 126.08 881.90 00:23:32.606 ======================================================== 00:23:32.606 Total : 3358.97 13.12 297.26 126.08 881.90 00:23:32.606 00:23:32.606 Initializing NVMe Controllers 00:23:32.606 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:23:32.606 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:32.606 Initialization complete. Launching workers. 00:23:32.606 ======================================================== 00:23:32.606 Latency(us) 00:23:32.606 Device Information : IOPS MiB/s Average min max 00:23:32.606 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3290.00 12.85 303.56 184.87 850.91 00:23:32.606 ======================================================== 00:23:32.606 Total : 3290.00 12.85 303.56 184.87 850.91 00:23:32.606 00:23:32.606 Initializing NVMe Controllers 00:23:32.606 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:23:32.606 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:32.606 Initialization complete. Launching workers. 
00:23:32.606 ======================================================== 00:23:32.606 Latency(us) 00:23:32.606 Device Information : IOPS MiB/s Average min max 00:23:32.606 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3291.00 12.86 303.45 148.69 852.85 00:23:32.606 ======================================================== 00:23:32.606 Total : 3291.00 12.86 303.45 148.69 852.85 00:23:32.606 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 102956 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 102957 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:32.606 rmmod nvme_tcp 00:23:32.606 rmmod nvme_fabrics 00:23:32.606 rmmod nvme_keyring 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 102906 ']' 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 102906 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 102906 ']' 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 102906 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:32.606 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102906 00:23:32.865 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:32.865 killing process with pid 102906 00:23:32.865 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:32.865 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102906' 00:23:32.865 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 102906 00:23:32.865 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@974 -- # wait 102906 00:23:32.865 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:32.865 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:32.865 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:23:33.124 00:23:33.124 real 0m3.663s 00:23:33.124 user 0m5.654s 00:23:33.124 
sys 0m1.500s 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:33.124 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:33.124 ************************************ 00:23:33.124 END TEST nvmf_control_msg_list 00:23:33.124 ************************************ 00:23:33.384 18:43:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:33.384 18:43:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:33.384 18:43:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:33.384 18:43:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:33.384 ************************************ 00:23:33.384 START TEST nvmf_wait_for_buf 00:23:33.384 ************************************ 00:23:33.384 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:33.384 * Looking for test storage... 00:23:33.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:33.384 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:33.384 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:23:33.384 18:43:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.384 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:33.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.385 --rc genhtml_branch_coverage=1 00:23:33.385 --rc genhtml_function_coverage=1 00:23:33.385 --rc genhtml_legend=1 00:23:33.385 --rc geninfo_all_blocks=1 00:23:33.385 --rc geninfo_unexecuted_blocks=1 00:23:33.385 00:23:33.385 ' 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:33.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.385 --rc genhtml_branch_coverage=1 00:23:33.385 --rc genhtml_function_coverage=1 00:23:33.385 --rc genhtml_legend=1 00:23:33.385 --rc geninfo_all_blocks=1 00:23:33.385 --rc geninfo_unexecuted_blocks=1 00:23:33.385 00:23:33.385 ' 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:33.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.385 --rc genhtml_branch_coverage=1 00:23:33.385 --rc genhtml_function_coverage=1 00:23:33.385 --rc genhtml_legend=1 00:23:33.385 --rc geninfo_all_blocks=1 00:23:33.385 --rc geninfo_unexecuted_blocks=1 00:23:33.385 00:23:33.385 ' 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:33.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.385 --rc genhtml_branch_coverage=1 00:23:33.385 --rc genhtml_function_coverage=1 00:23:33.385 --rc genhtml_legend=1 00:23:33.385 --rc geninfo_all_blocks=1 00:23:33.385 --rc geninfo_unexecuted_blocks=1 00:23:33.385 00:23:33.385 ' 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:33.385 18:43:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:33.385 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 
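The "[: : integer expression expected" line above is not a test failure: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' against a flag that is unset in this job, and bash's test builtin rejects the empty string as a number before the -eq comparison harmlessly evaluates false. A hedged sketch of the usual guard for that pattern (FLAG is a stand-in for whichever variable is empty at common.sh:33, not its real name):

    # Default the possibly-unset flag to 0 so test always sees an integer.
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "optional feature enabled"    # placeholder for the real branch body
    fi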
00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.385 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:33.386 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:33.386 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:33.386 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:33.386 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:33.644 Cannot find device "nvmf_init_br" 00:23:33.644 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:23:33.644 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:33.644 Cannot find device "nvmf_init_br2" 00:23:33.644 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:23:33.644 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:33.644 Cannot find device "nvmf_tgt_br" 00:23:33.644 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:23:33.644 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:33.644 Cannot find device "nvmf_tgt_br2" 00:23:33.644 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:23:33.644 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:33.645 Cannot find device "nvmf_init_br" 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:33.645 Cannot find device "nvmf_init_br2" 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:33.645 Cannot find device "nvmf_tgt_br" 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:33.645 Cannot find device "nvmf_tgt_br2" 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:33.645 Cannot find device "nvmf_br" 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:33.645 Cannot find device "nvmf_init_if" 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:33.645 Cannot find device "nvmf_init_if2" 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:33.645 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:33.645 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:33.645 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:33.904 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:33.904 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:23:33.904 00:23:33.904 --- 10.0.0.3 ping statistics --- 00:23:33.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.904 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:33.904 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:33.904 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:23:33.904 00:23:33.904 --- 10.0.0.4 ping statistics --- 00:23:33.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.904 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:33.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:33.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:23:33.904 00:23:33.904 --- 10.0.0.1 ping statistics --- 00:23:33.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.904 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:33.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:33.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:23:33.904 00:23:33.904 --- 10.0.0.2 ping statistics --- 00:23:33.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.904 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # return 0 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:33.904 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:33.905 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:23:33.905 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:33.905 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:33.905 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:33.905 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=103197 00:23:33.905 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:33.905 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 103197 00:23:33.905 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 103197 ']' 00:23:33.905 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.905 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:33.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.905 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.905 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:33.905 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:33.905 [2024-12-14 18:43:49.647943] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
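Unlike the control_msg_list run above, this target is launched with --wait-for-rpc, which holds SPDK subsystem initialization until an RPC releases it; the next trace lines use that window to shrink the iobuf pool before any transport can allocate buffers. The equivalent manual sequence via SPDK's scripts/rpc.py, with flags and values copied from the trace that follows (rpc_cmd in the test harness is a thin wrapper around this script):

    ./scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0   # no accel buffer caching
    ./scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192 # deliberately tiny small pool
    ./scripts/rpc.py framework_start_init                                          # let the framework finish booting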
00:23:33.905 [2024-12-14 18:43:49.648030] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.163 [2024-12-14 18:43:49.791250] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.163 [2024-12-14 18:43:49.879951] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.163 [2024-12-14 18:43:49.880019] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.163 [2024-12-14 18:43:49.880034] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.163 [2024-12-14 18:43:49.880045] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.163 [2024-12-14 18:43:49.880054] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:34.163 [2024-12-14 18:43:49.880089] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.422 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:34.422 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:23:34.422 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:34.422 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:34.422 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.422 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.422 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:34.422 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:23:34.422 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:23:34.422 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.422 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.422 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.422 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:23:34.422 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.422 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.422 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.422 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:23:34.422 18:43:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.422 18:43:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.422 Malloc0 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.422 [2024-12-14 18:43:50.097203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.422 [2024-12-14 18:43:50.121310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.422 18:43:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:34.680 [2024-12-14 18:43:50.305870] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release.
00:23:36.057 Initializing NVMe Controllers
00:23:36.057 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:23:36.057 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:23:36.057 Initialization complete. Launching workers.
00:23:36.057 ========================================================
00:23:36.057 Latency(us)
00:23:36.057 Device Information : IOPS MiB/s Average min max
00:23:36.057 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.87 15.98 32461.08 8028.01 67021.94
00:23:36.057 ========================================================
00:23:36.057 Total : 127.87 15.98 32461.08 8028.01 67021.94
00:23:36.057
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]]
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:36.057 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 103197 ']'
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 103197
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 103197 ']'
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 103197
00:23:36.057 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname
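The retry counter just read back is the whole point of this suite: iobuf_set_options capped the small buffer pool at 154 entries of 8192 bytes, so the queue-depth-4, 131072-byte reads above (each spanning sixteen 8192-byte buffers) forced nvmf_TCP to wait for buffers 2022 times while still completing every I/O. A minimal sketch of the check the script performs, assuming the stock scripts/rpc.py client in place of the harness's rpc_cmd wrapper:

# Hedged sketch, not the harness code verbatim: read the nvmf_TCP small-pool
# retry counter and fail the test if the wait-for-buf path was never taken.
retry_count=$(scripts/rpc.py iobuf_get_stats |
        jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
if [[ $retry_count -eq 0 ]]; then
        echo "iobuf pool never ran dry; nothing was exercised" >&2
        exit 1
fi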
00:23:36.316 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:36.316 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 103197 00:23:36.316 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:36.316 killing process with pid 103197 00:23:36.316 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:36.316 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 103197' 00:23:36.316 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 103197 00:23:36.316 18:43:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 103197 00:23:36.316 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:36.316 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:36.316 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:36.316 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:23:36.316 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:36.316 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:23:36.316 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:23:36.316 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:36.316 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:36.316 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:36.316 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:36.575 18:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns
00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0
00:23:36.575
00:23:36.575 real 0m3.362s
00:23:36.575 user 0m2.741s
00:23:36.575 sys 0m0.783s
00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:23:36.575 ************************************
00:23:36.575 END TEST nvmf_wait_for_buf
00:23:36.575 ************************************
00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']'
00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp
00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:36.575 18:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:23:36.835 ************************************
00:23:36.835 START TEST nvmf_fuzz
00:23:36.835 ************************************
00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp
00:23:36.835 * Looking for test storage...
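Just before the START banner above, nvmftestfini tore the wait_for_buf topology back down (nvmf/common.sh@233-246 in the trace). Condensed, and with the final namespace removal assumed to be what _remove_spdk_ns does, that teardown is:

# Hedged sketch of the nvmf_veth_fini sequence traced above.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster   # detach bridge ports first
        ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if           # deleting one veth end removes its peer
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk      # assumption: _remove_spdk_ns ends here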
00:23:36.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:36.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.835 --rc genhtml_branch_coverage=1 00:23:36.835 --rc genhtml_function_coverage=1 00:23:36.835 --rc genhtml_legend=1 00:23:36.835 --rc geninfo_all_blocks=1 00:23:36.835 --rc geninfo_unexecuted_blocks=1 00:23:36.835 00:23:36.835 ' 00:23:36.835 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:36.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.835 --rc genhtml_branch_coverage=1 00:23:36.836 --rc genhtml_function_coverage=1 00:23:36.836 --rc genhtml_legend=1 00:23:36.836 --rc geninfo_all_blocks=1 00:23:36.836 --rc geninfo_unexecuted_blocks=1 00:23:36.836 00:23:36.836 ' 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:36.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.836 --rc genhtml_branch_coverage=1 00:23:36.836 --rc genhtml_function_coverage=1 00:23:36.836 --rc genhtml_legend=1 00:23:36.836 --rc geninfo_all_blocks=1 00:23:36.836 --rc geninfo_unexecuted_blocks=1 00:23:36.836 00:23:36.836 ' 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:36.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.836 --rc genhtml_branch_coverage=1 00:23:36.836 --rc genhtml_function_coverage=1 00:23:36.836 --rc genhtml_legend=1 00:23:36.836 --rc geninfo_all_blocks=1 00:23:36.836 --rc geninfo_unexecuted_blocks=1 00:23:36.836 00:23:36.836 ' 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
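The lt/cmp_versions/decimal trace above is scripts/common.sh deciding, from the 'lcov --version' output, which coverage option spellings to export. The comparison itself is a plain field-by-field numeric walk over the dotted version strings; a self-contained sketch of that logic (simplified from the traced helper, not copied from it):

# Returns success when version $1 sorts strictly before version $2.
lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
                ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
                ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1   # equal versions are not less-than
}

lt 1.15 2 && echo "old lcov: keep the --rc lcov_*_coverage=1 spellings"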
00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:36.836 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
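One real wart is visible here: "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" comes from the traced test '[' '' -eq 1 ']', a numeric comparison against a variable this job leaves unset. The run shrugs it off, but the standard bash guard is to expand the variable with a default; a sketch with hypothetical names, since the log does not show which flag line 33 actually reads:

# Hypothetical flag and option names; only the ${var:-0} guard is the point.
if [ "${SPDK_TEST_SOME_FEATURE:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-feature-arg)
fi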
00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:36.836 Cannot find device "nvmf_init_br" 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:23:36.836 18:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:36.836 Cannot find device "nvmf_init_br2" 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:23:36.836 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:36.836 Cannot find device "nvmf_tgt_br" 00:23:36.837 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:23:36.837 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:37.096 Cannot find device "nvmf_tgt_br2" 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:37.096 Cannot find device "nvmf_init_br" 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:37.096 Cannot find device "nvmf_init_br2" 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:37.096 Cannot find device "nvmf_tgt_br" 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:37.096 Cannot find device "nvmf_tgt_br2" 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:37.096 Cannot find device "nvmf_br" 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:37.096 Cannot find device "nvmf_init_if" 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:37.096 Cannot find device "nvmf_init_if2" 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:37.096 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:37.096 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:37.096 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:37.355 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:37.355 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:37.355 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:37.355 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:37.355 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:37.355 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:37.355 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:37.355 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:37.355 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:37.355 18:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:23:37.355 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:23:37.355 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:23:37.355 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:23:37.355 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:23:37.355 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms
00:23:37.355
00:23:37.355 --- 10.0.0.3 ping statistics ---
00:23:37.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:37.355 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms
00:23:37.355 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:23:37.355 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:23:37.355 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms
00:23:37.355
00:23:37.355 --- 10.0.0.4 ping statistics ---
00:23:37.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:37.355 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms
00:23:37.355 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:23:37.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:37.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms
00:23:37.355
00:23:37.355 --- 10.0.0.1 ping statistics ---
00:23:37.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:37.355 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms
00:23:37.356 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:23:37.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:37.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms
00:23:37.356
00:23:37.356 --- 10.0.0.2 ping statistics ---
00:23:37.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:37.356 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms
00:23:37.356 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:37.356 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # return 0
00:23:37.356 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:23:37.356 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:37.356 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:23:37.356 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:23:37.356 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:37.356 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:23:37.356 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:23:37.356 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=103464
00:23:37.356 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:23:37.356 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:23:37.356 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 103464
00:23:37.356 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 103464 ']'
00:23:37.356 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:37.356 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:37.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
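The block above is nvmf_veth_init: a network namespace for the target side, veth pairs whose bridge ends are enslaved to nvmf_br, iptables ACCEPT rules for port 4420 tagged with SPDK_NVMF comments so the iptr cleanup can strip them later, and pings proving both directions work before nvmf_tgt is launched inside the namespace. Reduced to one interface per side, the traced commands amount to:

# Condensed from the trace above; the full harness creates two pairs per side.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # join both ends
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                          # host -> namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # namespace -> host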
00:23:37.356 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:37.356 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:37.923 Malloc0 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:23:37.923 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:23:38.182 Shutting down the fuzz application 00:23:38.182 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:38.441 Shutting down the fuzz application 00:23:38.441 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:38.441 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.441 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:38.441 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.441 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:38.441 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:38.441 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:38.441 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:23:38.441 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:38.441 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:23:38.441 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:38.441 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:38.441 rmmod nvme_tcp 00:23:38.441 rmmod nvme_fabrics 00:23:38.700 rmmod nvme_keyring 00:23:38.700 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:38.700 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:23:38.700 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:23:38.700 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 103464 ']' 00:23:38.700 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 103464 00:23:38.700 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 103464 ']' 00:23:38.700 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 103464 00:23:38.700 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:23:38.700 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:38.700 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 103464 00:23:38.700 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:38.700 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:38.700 killing process with pid 103464 00:23:38.700 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 103464' 00:23:38.700 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 103464 00:23:38.700 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 103464 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:38.959 
18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:38.959 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.218 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.218 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.218 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:23:39.218 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:23:39.218 ************************************ 00:23:39.218 END TEST nvmf_fuzz 00:23:39.218 ************************************ 00:23:39.218 00:23:39.218 real 0m2.427s 00:23:39.218 user 0m2.053s 00:23:39.218 sys 0m0.788s 00:23:39.218 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:39.218 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:39.218 
18:43:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:39.218 18:43:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:39.218 18:43:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:39.218 18:43:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:39.218 ************************************ 00:23:39.218 START TEST nvmf_multiconnection 00:23:39.218 ************************************ 00:23:39.218 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:39.218 * Looking for test storage... 00:23:39.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:39.218 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:39.218 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:39.218 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:39.490 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:39.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.490 --rc genhtml_branch_coverage=1 00:23:39.490 --rc genhtml_function_coverage=1 00:23:39.491 --rc genhtml_legend=1 00:23:39.491 --rc geninfo_all_blocks=1 00:23:39.491 --rc geninfo_unexecuted_blocks=1 00:23:39.491 00:23:39.491 ' 00:23:39.491 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:39.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.491 --rc genhtml_branch_coverage=1 00:23:39.491 --rc genhtml_function_coverage=1 00:23:39.491 --rc genhtml_legend=1 00:23:39.491 --rc geninfo_all_blocks=1 00:23:39.491 --rc geninfo_unexecuted_blocks=1 00:23:39.491 00:23:39.491 ' 00:23:39.491 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:39.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.491 --rc genhtml_branch_coverage=1 00:23:39.491 --rc genhtml_function_coverage=1 00:23:39.491 --rc genhtml_legend=1 00:23:39.491 --rc geninfo_all_blocks=1 00:23:39.491 --rc geninfo_unexecuted_blocks=1 00:23:39.491 00:23:39.491 ' 00:23:39.491 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:39.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.491 --rc genhtml_branch_coverage=1 00:23:39.491 --rc genhtml_function_coverage=1 00:23:39.491 --rc genhtml_legend=1 00:23:39.491 --rc geninfo_all_blocks=1 00:23:39.491 --rc geninfo_unexecuted_blocks=1 00:23:39.491 00:23:39.491 ' 00:23:39.491 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:39.491 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:23:39.491 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.491 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.491 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.491 18:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.491 
18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:39.491 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:39.491 18:43:55 
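One line above deserves a note: "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" is test(1) complaining that build_nvmf_app_args ran '[' '' -eq 1 ']' with an unset flag; -eq requires integers on both sides, so the test fails noisily but harmlessly and the run continues. A defaulting expansion is the usual guard, sketched here with FLAG as a hypothetical stand-in for whichever variable common.sh@33 actually tests:

    # hedged sketch: default the flag to 0 so test(1) always sees an integer
    if [ "${FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--no-huge -s 1024)   # example args only; not from this trace
    fi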
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:39.491 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:39.491 Cannot find device "nvmf_init_br" 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:39.492 Cannot find device "nvmf_init_br2" 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:39.492 Cannot find device "nvmf_tgt_br" 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:39.492 Cannot find device "nvmf_tgt_br2" 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:39.492 Cannot find device "nvmf_init_br" 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:39.492 Cannot find device "nvmf_init_br2" 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:39.492 Cannot find device "nvmf_tgt_br" 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:39.492 Cannot find device "nvmf_tgt_br2" 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:39.492 Cannot find device "nvmf_br" 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:39.492 Cannot find device "nvmf_init_if" 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:23:39.492 Cannot find device "nvmf_init_if2" 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:39.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:39.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:39.492 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:39.824 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:39.824 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:23:39.824 00:23:39.824 --- 10.0.0.3 ping statistics --- 00:23:39.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.824 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:39.824 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:39.824 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:23:39.824 00:23:39.824 --- 10.0.0.4 ping statistics --- 00:23:39.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.824 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:39.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:39.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:23:39.824 00:23:39.824 --- 10.0.0.1 ping statistics --- 00:23:39.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.824 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:39.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:23:39.824 00:23:39.824 --- 10.0.0.2 ping statistics --- 00:23:39.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.824 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # return 0 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=103715 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 103715 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 103715 ']' 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:39.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
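The nvmf_veth_init block above (common.sh@145-@225) is easier to read condensed: two initiator veths stay in the root namespace (10.0.0.1/24 and 10.0.0.2/24), two target veths move into nvmf_tgt_ns_spdk (10.0.0.3/24 and 10.0.0.4/24), all four bridge-side peers are enslaved to nvmf_br, iptables opens TCP/4420 toward the initiator side, and the four pings prove both directions. The same sequence, minus the expected-to-fail "Cannot find device" cleanup steps, as a standalone sketch:

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: *_if is the addressed endpoint, *_br the bridge-side peer
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2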
00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:39.824 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:39.825 [2024-12-14 18:43:55.492893] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:39.825 [2024-12-14 18:43:55.492992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.100 [2024-12-14 18:43:55.635215] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:40.100 [2024-12-14 18:43:55.715851] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.100 [2024-12-14 18:43:55.715917] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.100 [2024-12-14 18:43:55.715932] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.100 [2024-12-14 18:43:55.715943] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.100 [2024-12-14 18:43:55.715952] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.100 [2024-12-14 18:43:55.716117] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.100 [2024-12-14 18:43:55.716296] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.100 [2024-12-14 18:43:55.717015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:40.100 [2024-12-14 18:43:55.717066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.359 [2024-12-14 18:43:55.903894] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:40.359 18:43:55 
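nvmfappstart above launches the target inside the namespace and blocks until the RPC socket answers; the EAL banner and the four "Reactor started" lines are the -m 0xF core mask coming up. In outline, with a simple polling loop standing in for the suite's waitforlisten helper (rpc_get_methods is a standard SPDK RPC, but this loop is a sketch of the idea, not the helper's real logic):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the default RPC socket (/var/tmp/spdk.sock) until the app listens;
    # the unix socket is shared across namespaces via the filesystem
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &> /dev/null; do
        sleep 0.5
    done
    # transport options as traced: -t tcp -o from NVMF_TRANSPORT_OPTS, -u 8192
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192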
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.359 Malloc1 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.359 [2024-12-14 18:43:55.969357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.359 Malloc2 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.359 18:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.359 Malloc3 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd 
bdev_malloc_create 64 512 -b Malloc4 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.359 Malloc4 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.359 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.619 Malloc5 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.619 18:43:56 
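The Malloc1 through Malloc11 blocks above and below all run the same four-RPC cycle from target/multiconnection.sh@21-@25; only the index changes. Condensed:

    # one subsystem per i: 64 MiB malloc bdev with 512 B blocks, its own NQN
    # and serial, the bdev attached as a namespace, and a TCP listener on the
    # in-namespace target address
    for i in $(seq 1 "$NVMF_SUBSYS"); do   # NVMF_SUBSYS=11 in this run
        rpc_cmd bdev_malloc_create "$MALLOC_BDEV_SIZE" "$MALLOC_BLOCK_SIZE" -b "Malloc$i"
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.3 -s 4420
    done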
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.619 Malloc6 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:23:40.619 Malloc7 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.619 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.620 Malloc8 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.620 
18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.620 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.620 Malloc9 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.879 Malloc10 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.879 18:43:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.879 Malloc11 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:23:40.879 
18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:40.879 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:23:41.137 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:41.137 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:41.137 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:41.137 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:41.137 18:43:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:43.041 18:43:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:43.041 18:43:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:43.041 18:43:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:23:43.041 18:43:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:43.041 18:43:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:43.041 18:43:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:43.041 18:43:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:43.041 18:43:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:23:43.300 18:43:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:43.300 18:43:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:43.300 18:43:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:43.300 18:43:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:43.300 18:43:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:45.203 18:44:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:45.203 18:44:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:45.203 18:44:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:23:45.203 18:44:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:45.203 18:44:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:45.203 18:44:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:45.203 18:44:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:45.203 18:44:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:23:45.462 18:44:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:45.462 18:44:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:45.462 18:44:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:45.462 18:44:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:45.462 18:44:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:47.366 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:47.366 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:47.366 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:23:47.366 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:47.366 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:47.366 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:47.366 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:47.366 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:23:47.625 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:23:47.625 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:47.625 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:47.625 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:47.625 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 
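The connect phase traced above (cnode1 through cnode4 so far) and continuing below repeats the same pair of steps per subsystem: nvme-cli attaches the subsystem through the bridge, then waitforserial polls lsblk until a block device with the expected serial appears. Condensed, with waitforserial reconstructed from the common/autotest_common.sh@1198-@1208 xtrace (the real helper appears to take an optional device count as a second argument, elided here):

    # reconstruction from the xtrace, not the verbatim helper: retry up to
    # 16 times, two seconds apart, until the serial shows up in lsblk
    waitforserial() {
        local i=0
        local nvme_device_counter=1 nvme_devices=0
        sleep 2
        while ((i++ <= 15)); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")
            ((nvme_devices == nvme_device_counter)) && return 0
            sleep 2
        done
        return 1
    }

    for i in $(seq 1 "$NVMF_SUBSYS"); do
        # NVME_HOST carries --hostnqn/--hostid, set when common.sh was sourced
        nvme connect "${NVME_HOST[@]}" -t tcp \
            -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.3 -s 4420
        waitforserial "SPDK$i"
    done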
00:23:49.529 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:49.787 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:49.787 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:23:49.787 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:49.787 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:49.787 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:49.787 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:49.787 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:23:49.787 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:49.787 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:49.787 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:49.787 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:49.787 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:52.321 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:52.321 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:52.321 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:23:52.321 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:52.321 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:52.321 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:52.321 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:52.321 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:23:52.321 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:23:52.321 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:52.321 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:52.321 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:52.321 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:54.226 18:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:54.226 18:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:54.226 18:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:23:54.226 18:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:54.226 18:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:54.226 18:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:54.226 18:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:54.226 18:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:23:54.226 18:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:54.226 18:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:54.226 18:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:54.226 18:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:54.226 18:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:56.759 18:44:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:56.759 18:44:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:56.759 18:44:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:23:56.759 18:44:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:56.760 18:44:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:56.760 18:44:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:56.760 18:44:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:56.760 18:44:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:23:56.760 18:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:23:56.760 18:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:56.760 18:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:56.760 18:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:56.760 18:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:58.663 18:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:58.663 18:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:23:58.663 18:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:58.663 18:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:58.663 18:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:58.663 18:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:58.663 18:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:58.663 18:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:23:58.663 18:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:58.663 18:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:58.663 18:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:58.663 18:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:58.663 18:44:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:01.197 18:44:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:01.197 18:44:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:01.197 18:44:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:24:01.197 18:44:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:01.197 18:44:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:01.197 18:44:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:01.197 18:44:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.197 18:44:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:24:01.197 18:44:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:01.197 18:44:16 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:01.197 18:44:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:01.197 18:44:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:01.197 18:44:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:03.100 18:44:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:03.100 18:44:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:03.100 18:44:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:24:03.100 18:44:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:03.100 18:44:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:03.100 18:44:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:03.100 18:44:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:03.100 18:44:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:24:03.100 18:44:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:03.100 18:44:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:03.100 18:44:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:03.100 18:44:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:03.100 18:44:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:05.038 18:44:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:05.038 18:44:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:24:05.038 18:44:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:05.038 18:44:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:05.038 18:44:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:05.038 18:44:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:05.038 18:44:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:05.038 [global] 00:24:05.038 thread=1 00:24:05.038 invalidate=1 00:24:05.038 rw=read 00:24:05.038 time_based=1 00:24:05.038 runtime=10 00:24:05.038 ioengine=libaio 00:24:05.038 direct=1 00:24:05.038 bs=262144 00:24:05.038 iodepth=64 
00:24:05.038 norandommap=1 00:24:05.038 numjobs=1 00:24:05.038 00:24:05.038 [job0] 00:24:05.038 filename=/dev/nvme0n1 00:24:05.038 [job1] 00:24:05.038 filename=/dev/nvme10n1 00:24:05.297 [job2] 00:24:05.297 filename=/dev/nvme1n1 00:24:05.297 [job3] 00:24:05.297 filename=/dev/nvme2n1 00:24:05.297 [job4] 00:24:05.297 filename=/dev/nvme3n1 00:24:05.297 [job5] 00:24:05.297 filename=/dev/nvme4n1 00:24:05.297 [job6] 00:24:05.297 filename=/dev/nvme5n1 00:24:05.297 [job7] 00:24:05.297 filename=/dev/nvme6n1 00:24:05.297 [job8] 00:24:05.297 filename=/dev/nvme7n1 00:24:05.297 [job9] 00:24:05.297 filename=/dev/nvme8n1 00:24:05.297 [job10] 00:24:05.298 filename=/dev/nvme9n1 00:24:05.298 Could not set queue depth (nvme0n1) 00:24:05.298 Could not set queue depth (nvme10n1) 00:24:05.298 Could not set queue depth (nvme1n1) 00:24:05.298 Could not set queue depth (nvme2n1) 00:24:05.298 Could not set queue depth (nvme3n1) 00:24:05.298 Could not set queue depth (nvme4n1) 00:24:05.298 Could not set queue depth (nvme5n1) 00:24:05.298 Could not set queue depth (nvme6n1) 00:24:05.298 Could not set queue depth (nvme7n1) 00:24:05.298 Could not set queue depth (nvme8n1) 00:24:05.298 Could not set queue depth (nvme9n1) 00:24:05.556 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.556 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.556 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.556 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.556 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.556 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.556 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.556 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.556 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.556 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.556 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.556 fio-3.35 00:24:05.556 Starting 11 threads 00:24:17.774 00:24:17.774 job0: (groupid=0, jobs=1): err= 0: pid=104175: Sat Dec 14 18:44:31 2024 00:24:17.774 read: IOPS=288, BW=72.1MiB/s (75.6MB/s)(729MiB/10114msec) 00:24:17.774 slat (usec): min=22, max=197224, avg=3428.85, stdev=14701.36 00:24:17.774 clat (msec): min=26, max=368, avg=218.14, stdev=54.22 00:24:17.774 lat (msec): min=27, max=440, avg=221.57, stdev=56.18 00:24:17.774 clat percentiles (msec): 00:24:17.774 | 1.00th=[ 132], 5.00th=[ 153], 10.00th=[ 163], 20.00th=[ 174], 00:24:17.774 | 30.00th=[ 180], 40.00th=[ 190], 50.00th=[ 205], 60.00th=[ 226], 00:24:17.774 | 70.00th=[ 249], 80.00th=[ 271], 90.00th=[ 296], 95.00th=[ 313], 00:24:17.774 | 99.00th=[ 342], 99.50th=[ 359], 99.90th=[ 368], 99.95th=[ 368], 00:24:17.774 | 99.99th=[ 368] 00:24:17.774 bw ( KiB/s): min=46592, max=96063, per=8.75%, avg=72983.75, stdev=15906.86, samples=20 00:24:17.774 iops : min= 182, max= 375, avg=285.05, stdev=62.14, samples=20 
00:24:17.774 lat (msec) : 50=0.55%, 100=0.03%, 250=70.13%, 500=29.29% 00:24:17.774 cpu : usr=0.11%, sys=1.30%, ctx=772, majf=0, minf=4097 00:24:17.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:24:17.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.774 issued rwts: total=2916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.774 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.774 job1: (groupid=0, jobs=1): err= 0: pid=104176: Sat Dec 14 18:44:31 2024 00:24:17.774 read: IOPS=175, BW=43.8MiB/s (45.9MB/s)(445MiB/10155msec) 00:24:17.774 slat (usec): min=23, max=184150, avg=5629.82, stdev=19556.61 00:24:17.774 clat (msec): min=33, max=558, avg=358.94, stdev=60.53 00:24:17.774 lat (msec): min=34, max=564, avg=364.57, stdev=63.96 00:24:17.774 clat percentiles (msec): 00:24:17.774 | 1.00th=[ 36], 5.00th=[ 296], 10.00th=[ 317], 20.00th=[ 334], 00:24:17.774 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 368], 60.00th=[ 376], 00:24:17.774 | 70.00th=[ 384], 80.00th=[ 393], 90.00th=[ 409], 95.00th=[ 426], 00:24:17.774 | 99.00th=[ 489], 99.50th=[ 514], 99.90th=[ 531], 99.95th=[ 558], 00:24:17.775 | 99.99th=[ 558] 00:24:17.775 bw ( KiB/s): min=35328, max=52328, per=5.27%, avg=43938.80, stdev=5225.89, samples=20 00:24:17.775 iops : min= 138, max= 204, avg=171.60, stdev=20.39, samples=20 00:24:17.775 lat (msec) : 50=1.63%, 250=2.02%, 500=95.51%, 750=0.84% 00:24:17.775 cpu : usr=0.11%, sys=0.76%, ctx=422, majf=0, minf=4097 00:24:17.775 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:24:17.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.775 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.775 issued rwts: total=1780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.775 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.775 job2: (groupid=0, jobs=1): err= 0: pid=104177: Sat Dec 14 18:44:31 2024 00:24:17.775 read: IOPS=487, BW=122MiB/s (128MB/s)(1229MiB/10085msec) 00:24:17.775 slat (usec): min=19, max=80058, avg=2014.54, stdev=7824.12 00:24:17.775 clat (msec): min=22, max=238, avg=129.04, stdev=18.55 00:24:17.775 lat (msec): min=22, max=238, avg=131.06, stdev=19.71 00:24:17.775 clat percentiles (msec): 00:24:17.775 | 1.00th=[ 81], 5.00th=[ 102], 10.00th=[ 111], 20.00th=[ 117], 00:24:17.775 | 30.00th=[ 124], 40.00th=[ 128], 50.00th=[ 130], 60.00th=[ 132], 00:24:17.775 | 70.00th=[ 136], 80.00th=[ 140], 90.00th=[ 148], 95.00th=[ 157], 00:24:17.775 | 99.00th=[ 190], 99.50th=[ 203], 99.90th=[ 224], 99.95th=[ 239], 00:24:17.775 | 99.99th=[ 239] 00:24:17.775 bw ( KiB/s): min=110592, max=137216, per=14.89%, avg=124159.20, stdev=8146.61, samples=20 00:24:17.775 iops : min= 432, max= 536, avg=484.90, stdev=31.74, samples=20 00:24:17.775 lat (msec) : 50=0.63%, 100=3.11%, 250=96.26% 00:24:17.775 cpu : usr=0.18%, sys=2.05%, ctx=675, majf=0, minf=4097 00:24:17.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:17.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.775 issued rwts: total=4916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.775 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.775 job3: (groupid=0, jobs=1): err= 0: pid=104178: Sat Dec 14 18:44:31 2024 00:24:17.775 read: 
IOPS=395, BW=98.9MiB/s (104MB/s)(999MiB/10106msec) 00:24:17.775 slat (usec): min=16, max=103467, avg=2498.29, stdev=8499.75 00:24:17.775 clat (msec): min=13, max=271, avg=159.08, stdev=34.64 00:24:17.775 lat (msec): min=14, max=271, avg=161.58, stdev=35.64 00:24:17.775 clat percentiles (msec): 00:24:17.775 | 1.00th=[ 31], 5.00th=[ 82], 10.00th=[ 124], 20.00th=[ 146], 00:24:17.775 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 171], 00:24:17.775 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 199], 00:24:17.775 | 99.00th=[ 222], 99.50th=[ 236], 99.90th=[ 259], 99.95th=[ 271], 00:24:17.775 | 99.99th=[ 271] 00:24:17.775 bw ( KiB/s): min=82432, max=171008, per=12.07%, avg=100645.65, stdev=18278.38, samples=20 00:24:17.775 iops : min= 322, max= 668, avg=393.10, stdev=71.41, samples=20 00:24:17.775 lat (msec) : 20=0.45%, 50=1.63%, 100=6.43%, 250=91.24%, 500=0.25% 00:24:17.775 cpu : usr=0.14%, sys=1.71%, ctx=1144, majf=0, minf=4097 00:24:17.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:17.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.775 issued rwts: total=3997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.775 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.775 job4: (groupid=0, jobs=1): err= 0: pid=104179: Sat Dec 14 18:44:31 2024 00:24:17.775 read: IOPS=183, BW=45.9MiB/s (48.1MB/s)(466MiB/10162msec) 00:24:17.775 slat (usec): min=23, max=160765, avg=5381.10, stdev=18219.95 00:24:17.775 clat (msec): min=20, max=530, avg=342.73, stdev=64.06 00:24:17.775 lat (msec): min=21, max=645, avg=348.11, stdev=66.92 00:24:17.775 clat percentiles (msec): 00:24:17.775 | 1.00th=[ 41], 5.00th=[ 275], 10.00th=[ 305], 20.00th=[ 321], 00:24:17.775 | 30.00th=[ 330], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 359], 00:24:17.775 | 70.00th=[ 368], 80.00th=[ 376], 90.00th=[ 388], 95.00th=[ 414], 00:24:17.775 | 99.00th=[ 464], 99.50th=[ 481], 99.90th=[ 485], 99.95th=[ 531], 00:24:17.775 | 99.99th=[ 531] 00:24:17.775 bw ( KiB/s): min=37301, max=67072, per=5.53%, avg=46088.25, stdev=6595.94, samples=20 00:24:17.775 iops : min= 145, max= 262, avg=179.95, stdev=25.84, samples=20 00:24:17.775 lat (msec) : 50=1.02%, 100=2.57%, 250=0.80%, 500=95.55%, 750=0.05% 00:24:17.775 cpu : usr=0.09%, sys=0.84%, ctx=370, majf=0, minf=4097 00:24:17.775 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:24:17.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.775 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.775 issued rwts: total=1865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.775 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.775 job5: (groupid=0, jobs=1): err= 0: pid=104180: Sat Dec 14 18:44:31 2024 00:24:17.775 read: IOPS=444, BW=111MiB/s (116MB/s)(1120MiB/10087msec) 00:24:17.775 slat (usec): min=17, max=110217, avg=2198.17, stdev=8300.56 00:24:17.775 clat (msec): min=24, max=236, avg=141.63, stdev=27.12 00:24:17.775 lat (msec): min=24, max=255, avg=143.83, stdev=27.91 00:24:17.775 clat percentiles (msec): 00:24:17.775 | 1.00th=[ 73], 5.00th=[ 105], 10.00th=[ 112], 20.00th=[ 120], 00:24:17.775 | 30.00th=[ 128], 40.00th=[ 133], 50.00th=[ 140], 60.00th=[ 150], 00:24:17.775 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 171], 95.00th=[ 180], 00:24:17.775 | 99.00th=[ 203], 99.50th=[ 215], 99.90th=[ 236], 99.95th=[ 236], 
00:24:17.775 | 99.99th=[ 236] 00:24:17.775 bw ( KiB/s): min=97987, max=138240, per=13.55%, avg=112989.25, stdev=12003.17, samples=20 00:24:17.775 iops : min= 382, max= 540, avg=441.30, stdev=46.94, samples=20 00:24:17.775 lat (msec) : 50=0.96%, 100=2.92%, 250=96.12% 00:24:17.775 cpu : usr=0.19%, sys=1.82%, ctx=884, majf=0, minf=4097 00:24:17.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:17.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.775 issued rwts: total=4479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.775 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.775 job6: (groupid=0, jobs=1): err= 0: pid=104181: Sat Dec 14 18:44:31 2024 00:24:17.775 read: IOPS=288, BW=72.1MiB/s (75.6MB/s)(730MiB/10116msec) 00:24:17.775 slat (usec): min=23, max=163224, avg=3441.67, stdev=14487.33 00:24:17.775 clat (msec): min=23, max=386, avg=217.97, stdev=48.54 00:24:17.775 lat (msec): min=23, max=404, avg=221.41, stdev=50.13 00:24:17.775 clat percentiles (msec): 00:24:17.775 | 1.00th=[ 68], 5.00th=[ 148], 10.00th=[ 167], 20.00th=[ 180], 00:24:17.775 | 30.00th=[ 192], 40.00th=[ 209], 50.00th=[ 213], 60.00th=[ 228], 00:24:17.775 | 70.00th=[ 239], 80.00th=[ 257], 90.00th=[ 279], 95.00th=[ 300], 00:24:17.775 | 99.00th=[ 334], 99.50th=[ 338], 99.90th=[ 384], 99.95th=[ 384], 00:24:17.775 | 99.99th=[ 388] 00:24:17.775 bw ( KiB/s): min=57344, max=87377, per=8.76%, avg=73056.65, stdev=9969.04, samples=20 00:24:17.775 iops : min= 224, max= 341, avg=285.30, stdev=38.91, samples=20 00:24:17.775 lat (msec) : 50=0.55%, 100=1.20%, 250=74.37%, 500=23.88% 00:24:17.775 cpu : usr=0.10%, sys=1.27%, ctx=641, majf=0, minf=4097 00:24:17.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:24:17.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.775 issued rwts: total=2919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.775 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.775 job7: (groupid=0, jobs=1): err= 0: pid=104182: Sat Dec 14 18:44:31 2024 00:24:17.775 read: IOPS=373, BW=93.3MiB/s (97.9MB/s)(943MiB/10098msec) 00:24:17.775 slat (usec): min=16, max=80721, avg=2517.53, stdev=8466.29 00:24:17.775 clat (msec): min=22, max=450, avg=168.63, stdev=34.89 00:24:17.775 lat (msec): min=22, max=450, avg=171.14, stdev=35.66 00:24:17.775 clat percentiles (msec): 00:24:17.775 | 1.00th=[ 35], 5.00th=[ 130], 10.00th=[ 144], 20.00th=[ 153], 00:24:17.775 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 174], 00:24:17.775 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 197], 95.00th=[ 209], 00:24:17.775 | 99.00th=[ 338], 99.50th=[ 380], 99.90th=[ 405], 99.95th=[ 405], 00:24:17.775 | 99.99th=[ 451] 00:24:17.775 bw ( KiB/s): min=79712, max=107520, per=11.38%, avg=94875.35, stdev=7712.36, samples=20 00:24:17.775 iops : min= 311, max= 420, avg=370.55, stdev=30.15, samples=20 00:24:17.775 lat (msec) : 50=1.03%, 100=1.96%, 250=95.62%, 500=1.38% 00:24:17.775 cpu : usr=0.19%, sys=1.55%, ctx=990, majf=0, minf=4097 00:24:17.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:24:17.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.775 issued rwts: total=3770,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:24:17.775 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.775 job8: (groupid=0, jobs=1): err= 0: pid=104183: Sat Dec 14 18:44:31 2024 00:24:17.775 read: IOPS=181, BW=45.4MiB/s (47.6MB/s)(462MiB/10162msec) 00:24:17.775 slat (usec): min=22, max=228903, avg=5432.10, stdev=19100.34 00:24:17.775 clat (msec): min=22, max=507, avg=345.94, stdev=59.35 00:24:17.775 lat (msec): min=22, max=564, avg=351.37, stdev=62.76 00:24:17.775 clat percentiles (msec): 00:24:17.775 | 1.00th=[ 142], 5.00th=[ 205], 10.00th=[ 300], 20.00th=[ 321], 00:24:17.775 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 363], 00:24:17.775 | 70.00th=[ 376], 80.00th=[ 380], 90.00th=[ 397], 95.00th=[ 414], 00:24:17.775 | 99.00th=[ 464], 99.50th=[ 464], 99.90th=[ 506], 99.95th=[ 506], 00:24:17.775 | 99.99th=[ 506] 00:24:17.775 bw ( KiB/s): min=33792, max=58368, per=5.47%, avg=45633.10, stdev=7328.79, samples=20 00:24:17.775 iops : min= 132, max= 228, avg=178.15, stdev=28.61, samples=20 00:24:17.775 lat (msec) : 50=0.87%, 250=4.82%, 500=94.10%, 750=0.22% 00:24:17.775 cpu : usr=0.04%, sys=0.88%, ctx=395, majf=0, minf=4097 00:24:17.775 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:24:17.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.775 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.775 issued rwts: total=1847,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.775 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.775 job9: (groupid=0, jobs=1): err= 0: pid=104184: Sat Dec 14 18:44:31 2024 00:24:17.775 read: IOPS=278, BW=69.6MiB/s (73.0MB/s)(704MiB/10108msec) 00:24:17.776 slat (usec): min=22, max=230915, avg=3546.04, stdev=14749.09 00:24:17.776 clat (msec): min=44, max=677, avg=225.87, stdev=60.47 00:24:17.776 lat (msec): min=44, max=704, avg=229.41, stdev=61.97 00:24:17.776 clat percentiles (msec): 00:24:17.776 | 1.00th=[ 148], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 188], 00:24:17.776 | 30.00th=[ 201], 40.00th=[ 211], 50.00th=[ 218], 60.00th=[ 226], 00:24:17.776 | 70.00th=[ 236], 80.00th=[ 249], 90.00th=[ 275], 95.00th=[ 296], 00:24:17.776 | 99.00th=[ 514], 99.50th=[ 558], 99.90th=[ 676], 99.95th=[ 676], 00:24:17.776 | 99.99th=[ 676] 00:24:17.776 bw ( KiB/s): min= 7695, max=84992, per=8.45%, avg=70462.65, stdev=17297.28, samples=20 00:24:17.776 iops : min= 30, max= 332, avg=275.15, stdev=67.57, samples=20 00:24:17.776 lat (msec) : 50=0.18%, 100=0.04%, 250=82.21%, 500=15.59%, 750=1.99% 00:24:17.776 cpu : usr=0.15%, sys=1.21%, ctx=740, majf=0, minf=4097 00:24:17.776 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:24:17.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.776 issued rwts: total=2816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.776 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.776 job10: (groupid=0, jobs=1): err= 0: pid=104185: Sat Dec 14 18:44:31 2024 00:24:17.776 read: IOPS=177, BW=44.3MiB/s (46.4MB/s)(450MiB/10160msec) 00:24:17.776 slat (usec): min=17, max=175899, avg=5454.45, stdev=17974.11 00:24:17.776 clat (msec): min=40, max=508, avg=355.10, stdev=50.32 00:24:17.776 lat (msec): min=40, max=573, avg=360.55, stdev=53.52 00:24:17.776 clat percentiles (msec): 00:24:17.776 | 1.00th=[ 194], 5.00th=[ 292], 10.00th=[ 313], 20.00th=[ 330], 00:24:17.776 | 30.00th=[ 338], 40.00th=[ 
351], 50.00th=[ 355], 60.00th=[ 368], 00:24:17.776 | 70.00th=[ 376], 80.00th=[ 388], 90.00th=[ 405], 95.00th=[ 418], 00:24:17.776 | 99.00th=[ 443], 99.50th=[ 472], 99.90th=[ 510], 99.95th=[ 510], 00:24:17.776 | 99.99th=[ 510] 00:24:17.776 bw ( KiB/s): min=38400, max=57856, per=5.33%, avg=44433.00, stdev=5344.79, samples=20 00:24:17.776 iops : min= 150, max= 226, avg=173.50, stdev=20.80, samples=20 00:24:17.776 lat (msec) : 50=0.67%, 100=0.06%, 250=1.83%, 500=97.11%, 750=0.33% 00:24:17.776 cpu : usr=0.07%, sys=0.85%, ctx=444, majf=0, minf=4097 00:24:17.776 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:24:17.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.776 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.776 issued rwts: total=1800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.776 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.776 00:24:17.776 Run status group 0 (all jobs): 00:24:17.776 READ: bw=814MiB/s (854MB/s), 43.8MiB/s-122MiB/s (45.9MB/s-128MB/s), io=8276MiB (8678MB), run=10085-10162msec 00:24:17.776 00:24:17.776 Disk stats (read/write): 00:24:17.776 nvme0n1: ios=5705/0, merge=0/0, ticks=1232992/0, in_queue=1232992, util=97.53% 00:24:17.776 nvme10n1: ios=3433/0, merge=0/0, ticks=1223559/0, in_queue=1223559, util=97.43% 00:24:17.776 nvme1n1: ios=9705/0, merge=0/0, ticks=1237725/0, in_queue=1237725, util=97.62% 00:24:17.776 nvme2n1: ios=7867/0, merge=0/0, ticks=1237240/0, in_queue=1237240, util=97.99% 00:24:17.776 nvme3n1: ios=3604/0, merge=0/0, ticks=1226090/0, in_queue=1226090, util=98.01% 00:24:17.776 nvme4n1: ios=8830/0, merge=0/0, ticks=1236886/0, in_queue=1236886, util=98.15% 00:24:17.776 nvme5n1: ios=5710/0, merge=0/0, ticks=1237932/0, in_queue=1237932, util=98.40% 00:24:17.776 nvme6n1: ios=7412/0, merge=0/0, ticks=1238797/0, in_queue=1238797, util=98.39% 00:24:17.776 nvme7n1: ios=3567/0, merge=0/0, ticks=1233629/0, in_queue=1233629, util=98.70% 00:24:17.776 nvme8n1: ios=5504/0, merge=0/0, ticks=1235077/0, in_queue=1235077, util=98.83% 00:24:17.776 nvme9n1: ios=3472/0, merge=0/0, ticks=1227012/0, in_queue=1227012, util=99.01% 00:24:17.776 18:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:17.776 [global] 00:24:17.776 thread=1 00:24:17.776 invalidate=1 00:24:17.776 rw=randwrite 00:24:17.776 time_based=1 00:24:17.776 runtime=10 00:24:17.776 ioengine=libaio 00:24:17.776 direct=1 00:24:17.776 bs=262144 00:24:17.776 iodepth=64 00:24:17.776 norandommap=1 00:24:17.776 numjobs=1 00:24:17.776 00:24:17.776 [job0] 00:24:17.776 filename=/dev/nvme0n1 00:24:17.776 [job1] 00:24:17.776 filename=/dev/nvme10n1 00:24:17.776 [job2] 00:24:17.776 filename=/dev/nvme1n1 00:24:17.776 [job3] 00:24:17.776 filename=/dev/nvme2n1 00:24:17.776 [job4] 00:24:17.776 filename=/dev/nvme3n1 00:24:17.776 [job5] 00:24:17.776 filename=/dev/nvme4n1 00:24:17.776 [job6] 00:24:17.776 filename=/dev/nvme5n1 00:24:17.776 [job7] 00:24:17.776 filename=/dev/nvme6n1 00:24:17.776 [job8] 00:24:17.776 filename=/dev/nvme7n1 00:24:17.776 [job9] 00:24:17.776 filename=/dev/nvme8n1 00:24:17.776 [job10] 00:24:17.776 filename=/dev/nvme9n1 00:24:17.776 Could not set queue depth (nvme0n1) 00:24:17.776 Could not set queue depth (nvme10n1) 00:24:17.776 Could not set queue depth (nvme1n1) 00:24:17.776 Could not set queue depth (nvme2n1) 00:24:17.776 Could not set queue depth (nvme3n1) 
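The randwrite pass starting here reuses the same generated job layout as the read pass above: the wrapper flags map one-to-one onto the job file it prints (-i 262144 becomes bs, -d 64 becomes iodepth, -t read/randwrite becomes rw, -r 10 becomes runtime), with one libaio job per connected namespace. As a sanity check on the read pass just completed, the per-job average bandwidths (72.1 + 43.8 + 122 + 98.9 + 45.9 + 111 + 72.1 + 93.3 + 45.4 + 69.6 + 44.3 MiB/s) sum to roughly 818 MiB/s, consistent with the 814 MiB/s aggregate in its run status line; the small gap comes from the jobs' slightly different run windows. The "Could not set queue depth" lines are printed once per device before the jobs start and the run proceeds regardless; they typically mean the harness could not raise a device's request-queue depth through sysfs, so they are informational for this test. For reference, a hypothetical standalone fio invocation equivalent to one job of this pass (plain upstream fio options only; the wrapper may set more):

    # One job of the randwrite pass, spelled out as a direct fio call.
    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=randwrite --bs=262144 --iodepth=64 \
        --runtime=10 --time_based=1 \
        --ioengine=libaio --direct=1 --invalidate=1 \
        --thread=1 --numjobs=1 --norandommap=1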
00:24:17.776 Could not set queue depth (nvme4n1) 00:24:17.776 Could not set queue depth (nvme5n1) 00:24:17.776 Could not set queue depth (nvme6n1) 00:24:17.776 Could not set queue depth (nvme7n1) 00:24:17.776 Could not set queue depth (nvme8n1) 00:24:17.776 Could not set queue depth (nvme9n1) 00:24:17.776 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:17.776 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:17.776 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:17.776 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:17.776 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:17.776 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:17.776 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:17.776 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:17.776 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:17.776 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:17.776 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:17.776 fio-3.35 00:24:17.776 Starting 11 threads 00:24:27.761 00:24:27.761 job0: (groupid=0, jobs=1): err= 0: pid=104382: Sat Dec 14 18:44:42 2024 00:24:27.761 write: IOPS=153, BW=38.5MiB/s (40.4MB/s)(394MiB/10228msec); 0 zone resets 00:24:27.761 slat (usec): min=25, max=106817, avg=6256.05, stdev=12160.87 00:24:27.761 clat (msec): min=8, max=653, avg=408.95, stdev=86.50 00:24:27.761 lat (msec): min=8, max=653, avg=415.21, stdev=87.27 00:24:27.761 clat percentiles (msec): 00:24:27.761 | 1.00th=[ 32], 5.00th=[ 186], 10.00th=[ 355], 20.00th=[ 405], 00:24:27.761 | 30.00th=[ 418], 40.00th=[ 426], 50.00th=[ 435], 60.00th=[ 439], 00:24:27.761 | 70.00th=[ 443], 80.00th=[ 447], 90.00th=[ 460], 95.00th=[ 468], 00:24:27.761 | 99.00th=[ 550], 99.50th=[ 600], 99.90th=[ 651], 99.95th=[ 651], 00:24:27.761 | 99.99th=[ 651] 00:24:27.761 bw ( KiB/s): min=34816, max=65024, per=3.80%, avg=38707.20, stdev=6284.76, samples=20 00:24:27.761 iops : min= 136, max= 254, avg=151.20, stdev=24.55, samples=20 00:24:27.761 lat (msec) : 10=0.32%, 20=0.25%, 50=0.70%, 100=1.84%, 250=3.17% 00:24:27.761 lat (msec) : 500=92.00%, 750=1.71% 00:24:27.761 cpu : usr=0.44%, sys=0.53%, ctx=1856, majf=0, minf=1 00:24:27.761 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:24:27.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.761 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:27.761 issued rwts: total=0,1575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.761 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:27.761 job1: (groupid=0, jobs=1): err= 0: pid=104383: Sat Dec 14 18:44:42 2024 00:24:27.761 write: IOPS=517, BW=129MiB/s (136MB/s)(1307MiB/10103msec); 0 zone resets 00:24:27.761 slat (usec): min=19, max=21298, avg=1907.54, stdev=3266.01 00:24:27.761 
clat (msec): min=15, max=220, avg=121.77, stdev=12.16 00:24:27.761 lat (msec): min=15, max=220, avg=123.68, stdev=11.89 00:24:27.761 clat percentiles (msec): 00:24:27.761 | 1.00th=[ 111], 5.00th=[ 113], 10.00th=[ 115], 20.00th=[ 117], 00:24:27.761 | 30.00th=[ 120], 40.00th=[ 121], 50.00th=[ 122], 60.00th=[ 123], 00:24:27.761 | 70.00th=[ 124], 80.00th=[ 125], 90.00th=[ 127], 95.00th=[ 129], 00:24:27.761 | 99.00th=[ 188], 99.50th=[ 199], 99.90th=[ 213], 99.95th=[ 213], 00:24:27.761 | 99.99th=[ 222] 00:24:27.761 bw ( KiB/s): min=106496, max=135680, per=12.97%, avg=132159.45, stdev=6233.00, samples=20 00:24:27.761 iops : min= 416, max= 530, avg=516.20, stdev=24.34, samples=20 00:24:27.761 lat (msec) : 20=0.08%, 50=0.31%, 100=0.33%, 250=99.29% 00:24:27.761 cpu : usr=1.02%, sys=1.78%, ctx=6253, majf=0, minf=1 00:24:27.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:27.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:27.761 issued rwts: total=0,5226,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.761 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:27.761 job2: (groupid=0, jobs=1): err= 0: pid=104395: Sat Dec 14 18:44:42 2024 00:24:27.761 write: IOPS=145, BW=36.3MiB/s (38.1MB/s)(371MiB/10216msec); 0 zone resets 00:24:27.761 slat (usec): min=19, max=115744, avg=6732.70, stdev=13005.50 00:24:27.761 clat (msec): min=88, max=675, avg=433.63, stdev=61.42 00:24:27.761 lat (msec): min=88, max=675, avg=440.36, stdev=61.08 00:24:27.761 clat percentiles (msec): 00:24:27.761 | 1.00th=[ 136], 5.00th=[ 330], 10.00th=[ 393], 20.00th=[ 422], 00:24:27.761 | 30.00th=[ 430], 40.00th=[ 439], 50.00th=[ 447], 60.00th=[ 456], 00:24:27.761 | 70.00th=[ 460], 80.00th=[ 464], 90.00th=[ 468], 95.00th=[ 472], 00:24:27.761 | 99.00th=[ 575], 99.50th=[ 625], 99.90th=[ 676], 99.95th=[ 676], 00:24:27.761 | 99.99th=[ 676] 00:24:27.761 bw ( KiB/s): min=33280, max=38912, per=3.57%, avg=36373.90, stdev=1235.83, samples=20 00:24:27.761 iops : min= 130, max= 152, avg=142.05, stdev= 4.82, samples=20 00:24:27.761 lat (msec) : 100=0.27%, 250=2.76%, 500=95.22%, 750=1.75% 00:24:27.761 cpu : usr=0.48%, sys=0.41%, ctx=1018, majf=0, minf=1 00:24:27.761 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.8% 00:24:27.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.761 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:27.761 issued rwts: total=0,1484,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.761 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:27.761 job3: (groupid=0, jobs=1): err= 0: pid=104396: Sat Dec 14 18:44:42 2024 00:24:27.761 write: IOPS=517, BW=129MiB/s (136MB/s)(1307MiB/10097msec); 0 zone resets 00:24:27.761 slat (usec): min=18, max=14237, avg=1907.71, stdev=3266.14 00:24:27.761 clat (msec): min=8, max=217, avg=121.65, stdev=11.60 00:24:27.761 lat (msec): min=8, max=217, avg=123.56, stdev=11.30 00:24:27.761 clat percentiles (msec): 00:24:27.761 | 1.00th=[ 111], 5.00th=[ 113], 10.00th=[ 115], 20.00th=[ 117], 00:24:27.761 | 30.00th=[ 120], 40.00th=[ 122], 50.00th=[ 122], 60.00th=[ 123], 00:24:27.761 | 70.00th=[ 124], 80.00th=[ 125], 90.00th=[ 127], 95.00th=[ 129], 00:24:27.761 | 99.00th=[ 182], 99.50th=[ 192], 99.90th=[ 211], 99.95th=[ 211], 00:24:27.761 | 99.99th=[ 218] 00:24:27.761 bw ( KiB/s): min=109568, max=137216, per=12.98%, avg=132198.40, stdev=5710.10, samples=20 
00:24:27.761 iops : min= 428, max= 536, avg=516.40, stdev=22.31, samples=20 00:24:27.761 lat (msec) : 10=0.04%, 20=0.08%, 50=0.23%, 100=0.34%, 250=99.31% 00:24:27.761 cpu : usr=1.02%, sys=1.36%, ctx=6372, majf=0, minf=1 00:24:27.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:27.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:27.761 issued rwts: total=0,5227,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.761 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:27.761 job4: (groupid=0, jobs=1): err= 0: pid=104397: Sat Dec 14 18:44:42 2024 00:24:27.761 write: IOPS=1069, BW=267MiB/s (280MB/s)(2686MiB/10051msec); 0 zone resets 00:24:27.761 slat (usec): min=26, max=68369, avg=916.42, stdev=1707.16 00:24:27.761 clat (msec): min=3, max=184, avg=58.93, stdev= 8.45 00:24:27.761 lat (msec): min=3, max=184, avg=59.84, stdev= 8.51 00:24:27.761 clat percentiles (msec): 00:24:27.761 | 1.00th=[ 55], 5.00th=[ 56], 10.00th=[ 56], 20.00th=[ 57], 00:24:27.761 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 58], 60.00th=[ 59], 00:24:27.761 | 70.00th=[ 60], 80.00th=[ 61], 90.00th=[ 62], 95.00th=[ 62], 00:24:27.761 | 99.00th=[ 112], 99.50th=[ 140], 99.90th=[ 146], 99.95th=[ 146], 00:24:27.761 | 99.99th=[ 153] 00:24:27.761 bw ( KiB/s): min=190976, max=283648, per=26.84%, avg=273459.20, stdev=19648.57, samples=20 00:24:27.762 iops : min= 746, max= 1108, avg=1068.20, stdev=76.75, samples=20 00:24:27.762 lat (msec) : 4=0.02%, 10=0.02%, 20=0.06%, 50=0.25%, 100=98.45% 00:24:27.762 lat (msec) : 250=1.21% 00:24:27.762 cpu : usr=2.67%, sys=3.18%, ctx=13583, majf=0, minf=1 00:24:27.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:24:27.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:27.762 issued rwts: total=0,10745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.762 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:27.762 job5: (groupid=0, jobs=1): err= 0: pid=104398: Sat Dec 14 18:44:42 2024 00:24:27.762 write: IOPS=160, BW=40.2MiB/s (42.1MB/s)(410MiB/10212msec); 0 zone resets 00:24:27.762 slat (usec): min=21, max=100729, avg=6011.35, stdev=11647.86 00:24:27.762 clat (msec): min=54, max=628, avg=391.90, stdev=86.43 00:24:27.762 lat (msec): min=54, max=629, avg=397.91, stdev=87.25 00:24:27.762 clat percentiles (msec): 00:24:27.762 | 1.00th=[ 84], 5.00th=[ 134], 10.00th=[ 338], 20.00th=[ 388], 00:24:27.762 | 30.00th=[ 401], 40.00th=[ 409], 50.00th=[ 418], 60.00th=[ 426], 00:24:27.762 | 70.00th=[ 430], 80.00th=[ 435], 90.00th=[ 443], 95.00th=[ 451], 00:24:27.762 | 99.00th=[ 527], 99.50th=[ 575], 99.90th=[ 625], 99.95th=[ 625], 00:24:27.762 | 99.99th=[ 625] 00:24:27.762 bw ( KiB/s): min=36864, max=70144, per=3.97%, avg=40396.80, stdev=7196.62, samples=20 00:24:27.762 iops : min= 144, max= 274, avg=157.80, stdev=28.11, samples=20 00:24:27.762 lat (msec) : 100=2.01%, 250=6.15%, 500=90.74%, 750=1.10% 00:24:27.762 cpu : usr=0.39%, sys=0.51%, ctx=1752, majf=0, minf=1 00:24:27.762 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.2% 00:24:27.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.762 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:27.762 issued rwts: total=0,1641,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.762 latency : 
target=0, window=0, percentile=100.00%, depth=64 00:24:27.762 job6: (groupid=0, jobs=1): err= 0: pid=104399: Sat Dec 14 18:44:42 2024 00:24:27.762 write: IOPS=153, BW=38.4MiB/s (40.3MB/s)(393MiB/10235msec); 0 zone resets 00:24:27.762 slat (usec): min=27, max=118936, avg=6364.02, stdev=12035.27 00:24:27.762 clat (msec): min=3, max=647, avg=410.07, stdev=76.21 00:24:27.762 lat (msec): min=3, max=647, avg=416.44, stdev=76.57 00:24:27.762 clat percentiles (msec): 00:24:27.762 | 1.00th=[ 46], 5.00th=[ 275], 10.00th=[ 368], 20.00th=[ 397], 00:24:27.762 | 30.00th=[ 414], 40.00th=[ 422], 50.00th=[ 426], 60.00th=[ 430], 00:24:27.762 | 70.00th=[ 439], 80.00th=[ 443], 90.00th=[ 456], 95.00th=[ 468], 00:24:27.762 | 99.00th=[ 542], 99.50th=[ 600], 99.90th=[ 651], 99.95th=[ 651], 00:24:27.762 | 99.99th=[ 651] 00:24:27.762 bw ( KiB/s): min=34746, max=55296, per=3.79%, avg=38601.30, stdev=4220.17, samples=20 00:24:27.762 iops : min= 135, max= 216, avg=150.75, stdev=16.52, samples=20 00:24:27.762 lat (msec) : 4=0.25%, 10=0.25%, 50=0.51%, 100=1.02%, 250=2.61% 00:24:27.762 lat (msec) : 500=93.77%, 750=1.59% 00:24:27.762 cpu : usr=0.42%, sys=0.53%, ctx=2197, majf=0, minf=2 00:24:27.762 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:24:27.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.762 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:27.762 issued rwts: total=0,1572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.762 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:27.762 job7: (groupid=0, jobs=1): err= 0: pid=104400: Sat Dec 14 18:44:42 2024 00:24:27.762 write: IOPS=144, BW=36.2MiB/s (37.9MB/s)(370MiB/10224msec); 0 zone resets 00:24:27.762 slat (usec): min=24, max=250863, avg=6552.13, stdev=14409.21 00:24:27.762 clat (msec): min=26, max=749, avg=435.64, stdev=93.44 00:24:27.762 lat (msec): min=26, max=749, avg=442.19, stdev=94.13 00:24:27.762 clat percentiles (msec): 00:24:27.762 | 1.00th=[ 61], 5.00th=[ 253], 10.00th=[ 372], 20.00th=[ 409], 00:24:27.762 | 30.00th=[ 426], 40.00th=[ 435], 50.00th=[ 447], 60.00th=[ 451], 00:24:27.762 | 70.00th=[ 456], 80.00th=[ 464], 90.00th=[ 502], 95.00th=[ 575], 00:24:27.762 | 99.00th=[ 676], 99.50th=[ 684], 99.90th=[ 751], 99.95th=[ 751], 00:24:27.762 | 99.99th=[ 751] 00:24:27.762 bw ( KiB/s): min=28672, max=51200, per=3.56%, avg=36249.60, stdev=4452.99, samples=20 00:24:27.762 iops : min= 112, max= 200, avg=141.60, stdev=17.39, samples=20 00:24:27.762 lat (msec) : 50=0.81%, 100=0.88%, 250=3.11%, 500=84.04%, 750=11.16% 00:24:27.762 cpu : usr=0.30%, sys=0.48%, ctx=1857, majf=0, minf=1 00:24:27.762 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:24:27.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.762 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:27.762 issued rwts: total=0,1479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.762 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:27.762 job8: (groupid=0, jobs=1): err= 0: pid=104401: Sat Dec 14 18:44:42 2024 00:24:27.762 write: IOPS=502, BW=126MiB/s (132MB/s)(1269MiB/10100msec); 0 zone resets 00:24:27.762 slat (usec): min=19, max=41492, avg=1965.18, stdev=3388.62 00:24:27.762 clat (msec): min=41, max=227, avg=125.39, stdev= 8.62 00:24:27.762 lat (msec): min=41, max=227, avg=127.35, stdev= 8.07 00:24:27.762 clat percentiles (msec): 00:24:27.762 | 1.00th=[ 114], 5.00th=[ 117], 10.00th=[ 118], 20.00th=[ 122], 
00:24:27.762 | 30.00th=[ 124], 40.00th=[ 125], 50.00th=[ 126], 60.00th=[ 127], 00:24:27.762 | 70.00th=[ 128], 80.00th=[ 129], 90.00th=[ 130], 95.00th=[ 132], 00:24:27.762 | 99.00th=[ 157], 99.50th=[ 180], 99.90th=[ 220], 99.95th=[ 220], 00:24:27.762 | 99.99th=[ 228] 00:24:27.762 bw ( KiB/s): min=112640, max=133120, per=12.59%, avg=128281.60, stdev=4121.10, samples=20 00:24:27.762 iops : min= 440, max= 520, avg=501.10, stdev=16.10, samples=20 00:24:27.762 lat (msec) : 50=0.08%, 100=0.34%, 250=99.59% 00:24:27.762 cpu : usr=0.93%, sys=1.49%, ctx=6909, majf=0, minf=1 00:24:27.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:27.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:27.762 issued rwts: total=0,5074,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.762 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:27.762 job9: (groupid=0, jobs=1): err= 0: pid=104402: Sat Dec 14 18:44:42 2024 00:24:27.762 write: IOPS=502, BW=126MiB/s (132MB/s)(1270MiB/10102msec); 0 zone resets 00:24:27.762 slat (usec): min=19, max=18442, avg=1962.87, stdev=3360.19 00:24:27.762 clat (msec): min=21, max=227, avg=125.29, stdev= 9.36 00:24:27.762 lat (msec): min=21, max=227, avg=127.25, stdev= 8.86 00:24:27.762 clat percentiles (msec): 00:24:27.762 | 1.00th=[ 114], 5.00th=[ 117], 10.00th=[ 118], 20.00th=[ 122], 00:24:27.762 | 30.00th=[ 124], 40.00th=[ 126], 50.00th=[ 126], 60.00th=[ 127], 00:24:27.762 | 70.00th=[ 128], 80.00th=[ 129], 90.00th=[ 130], 95.00th=[ 132], 00:24:27.762 | 99.00th=[ 163], 99.50th=[ 180], 99.90th=[ 220], 99.95th=[ 220], 00:24:27.762 | 99.99th=[ 228] 00:24:27.762 bw ( KiB/s): min=114688, max=133120, per=12.61%, avg=128409.60, stdev=3804.72, samples=20 00:24:27.762 iops : min= 448, max= 520, avg=501.60, stdev=14.86, samples=20 00:24:27.762 lat (msec) : 50=0.24%, 100=0.26%, 250=99.51% 00:24:27.762 cpu : usr=1.00%, sys=1.65%, ctx=4198, majf=0, minf=1 00:24:27.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:27.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:27.762 issued rwts: total=0,5079,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.762 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:27.762 job10: (groupid=0, jobs=1): err= 0: pid=104403: Sat Dec 14 18:44:42 2024 00:24:27.762 write: IOPS=159, BW=39.8MiB/s (41.7MB/s)(406MiB/10210msec); 0 zone resets 00:24:27.762 slat (usec): min=18, max=72403, avg=6054.45, stdev=11597.51 00:24:27.762 clat (msec): min=17, max=651, avg=395.85, stdev=95.57 00:24:27.762 lat (msec): min=17, max=651, avg=401.90, stdev=96.71 00:24:27.762 clat percentiles (msec): 00:24:27.762 | 1.00th=[ 65], 5.00th=[ 136], 10.00th=[ 257], 20.00th=[ 380], 00:24:27.762 | 30.00th=[ 397], 40.00th=[ 418], 50.00th=[ 426], 60.00th=[ 430], 00:24:27.762 | 70.00th=[ 443], 80.00th=[ 456], 90.00th=[ 464], 95.00th=[ 468], 00:24:27.762 | 99.00th=[ 527], 99.50th=[ 584], 99.90th=[ 651], 99.95th=[ 651], 00:24:27.762 | 99.99th=[ 651] 00:24:27.762 bw ( KiB/s): min=34816, max=80545, per=3.93%, avg=39995.25, stdev=9813.09, samples=20 00:24:27.762 iops : min= 136, max= 314, avg=156.20, stdev=38.20, samples=20 00:24:27.762 lat (msec) : 20=0.25%, 50=0.49%, 100=1.54%, 250=7.69%, 500=88.92% 00:24:27.762 lat (msec) : 750=1.11% 00:24:27.762 cpu : usr=0.51%, sys=0.47%, ctx=1889, majf=0, minf=1 
00:24:27.762 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:24:27.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.762 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:27.762 issued rwts: total=0,1625,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.762 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:27.762 00:24:27.762 Run status group 0 (all jobs): 00:24:27.762 WRITE: bw=995MiB/s (1043MB/s), 36.2MiB/s-267MiB/s (37.9MB/s-280MB/s), io=9.94GiB (10.7GB), run=10051-10235msec 00:24:27.762 00:24:27.762 Disk stats (read/write): 00:24:27.762 nvme0n1: ios=49/3028, merge=0/0, ticks=44/1196056, in_queue=1196100, util=97.79% 00:24:27.762 nvme10n1: ios=49/10322, merge=0/0, ticks=49/1215530, in_queue=1215579, util=98.06% 00:24:27.762 nvme1n1: ios=35/2842, merge=0/0, ticks=31/1190947, in_queue=1190978, util=97.83% 00:24:27.762 nvme2n1: ios=24/10322, merge=0/0, ticks=53/1214634, in_queue=1214687, util=98.06% 00:24:27.762 nvme3n1: ios=8/21317, merge=0/0, ticks=24/1215108, in_queue=1215132, util=97.93% 00:24:27.762 nvme4n1: ios=0/3151, merge=0/0, ticks=0/1194559, in_queue=1194559, util=97.98% 00:24:27.762 nvme5n1: ios=0/3020, merge=0/0, ticks=0/1197027, in_queue=1197027, util=98.38% 00:24:27.762 nvme6n1: ios=0/2832, merge=0/0, ticks=0/1196258, in_queue=1196258, util=98.35% 00:24:27.762 nvme7n1: ios=0/10006, merge=0/0, ticks=0/1213008, in_queue=1213008, util=98.58% 00:24:27.762 nvme8n1: ios=0/10019, merge=0/0, ticks=0/1213977, in_queue=1213977, util=98.72% 00:24:27.762 nvme9n1: ios=0/3120, merge=0/0, ticks=0/1194507, in_queue=1194507, util=98.66% 00:24:27.762 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:24:27.762 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:24:27.762 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.762 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:27.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:27.762 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:27.762 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:27.763 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:27.763 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:27.763 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:27.763 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:27.763 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:27.763 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:27.763 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:27.763 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:27.763 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:27.764 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.764 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:27.764 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.764 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.764 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:27.764 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:27.764 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:27.764 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:27.764 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:27.764 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:24:27.764 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:27.764 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:24:27.764 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:27.764 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:27.764 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.764 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:27.764 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.764 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.764 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:28.023 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:28.023 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 --
# set +x 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.023 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.023 rmmod nvme_tcp 00:24:28.023 rmmod nvme_fabrics 00:24:28.282 rmmod nvme_keyring 00:24:28.282 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.282 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:24:28.282 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:24:28.282 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 103715 ']' 00:24:28.282 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 103715 00:24:28.282 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 103715 ']' 00:24:28.282 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 103715 00:24:28.282 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:24:28.282 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.282 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 103715 00:24:28.282 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:28.282 killing process with pid 103715 00:24:28.282 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:28.282 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 103715' 00:24:28.282 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 103715 00:24:28.282 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 103715 00:24:28.850 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:28.850 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:28.850 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:28.850 18:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:24:28.850 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:28.850 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:24:28.850 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:24:28.850 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.850 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:28.850 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:28.850 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:28.850 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:28.850 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:28.850 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:28.850 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:28.850 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:28.850 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:28.850 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:28.850 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:28.851 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:28.851 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:28.851 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:29.109 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:29.109 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.109 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.109 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.109 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:24:29.109 00:24:29.109 real 0m49.824s 00:24:29.109 user 2m54.972s 00:24:29.109 sys 0m19.071s 00:24:29.109 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:29.109 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:29.109 ************************************ 00:24:29.109 END TEST nvmf_multiconnection 00:24:29.109 ************************************ 00:24:29.109 18:44:44 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:29.109 18:44:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:29.109 18:44:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:29.109 18:44:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:29.109 ************************************ 00:24:29.109 START TEST nvmf_initiator_timeout 00:24:29.109 ************************************ 00:24:29.109 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:29.109 * Looking for test storage... 00:24:29.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:29.109 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:29.109 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:24:29.109 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:29.109 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:29.109 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:29.110 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:29.110 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:29.110 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:24:29.110 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:24:29.110 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:24:29.110 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:24:29.110 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:24:29.110 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:24:29.110 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:24:29.110 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:29.110 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:24:29.110 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:24:29.110 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:29.110 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:29.110 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:24:29.110 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:24:29.110 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:29.110 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:24:29.110 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:29.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.369 --rc genhtml_branch_coverage=1 00:24:29.369 --rc genhtml_function_coverage=1 00:24:29.369 --rc genhtml_legend=1 00:24:29.369 --rc geninfo_all_blocks=1 00:24:29.369 --rc geninfo_unexecuted_blocks=1 00:24:29.369 00:24:29.369 ' 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:29.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.369 --rc genhtml_branch_coverage=1 00:24:29.369 --rc genhtml_function_coverage=1 00:24:29.369 --rc genhtml_legend=1 00:24:29.369 --rc geninfo_all_blocks=1 00:24:29.369 --rc geninfo_unexecuted_blocks=1 00:24:29.369 00:24:29.369 ' 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:29.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.369 --rc genhtml_branch_coverage=1 00:24:29.369 --rc genhtml_function_coverage=1 00:24:29.369 --rc genhtml_legend=1 00:24:29.369 --rc geninfo_all_blocks=1 00:24:29.369 --rc geninfo_unexecuted_blocks=1 00:24:29.369 00:24:29.369 ' 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:29.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.369 --rc genhtml_branch_coverage=1 00:24:29.369 --rc genhtml_function_coverage=1 00:24:29.369 --rc genhtml_legend=1 00:24:29.369 --rc geninfo_all_blocks=1 00:24:29.369 --rc geninfo_unexecuted_blocks=1 00:24:29.369 00:24:29.369 ' 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.369 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.370 18:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:29.370 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
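[editor's note] The interface and address variables above feed nvmf_veth_init below; the "Cannot find device" messages that follow are the expected no-op teardown of stale links before the topology is rebuilt. Pared down to the first initiator/target pair, the topology being built looks roughly like this (names and addresses taken from the log; run as root, error handling omitted):

  # One veth pair for the initiator (host side) and one for the target,
  # whose peer is moved into the nvmf_tgt_ns_spdk namespace; both
  # bridge-side ends are enslaved to nvmf_br so the two can talk.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
      ip link set "$link" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ping -c 1 10.0.0.3   # the host should now reach the target namespace
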
00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:29.370 Cannot find device "nvmf_init_br" 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:29.370 Cannot find device "nvmf_init_br2" 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:29.370 Cannot find device "nvmf_tgt_br" 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:29.370 Cannot find device "nvmf_tgt_br2" 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:29.370 Cannot find device "nvmf_init_br" 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:29.370 Cannot find device "nvmf_init_br2" 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:29.370 Cannot find device "nvmf_tgt_br" 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:29.370 Cannot find device "nvmf_tgt_br2" 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:24:29.370 18:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:29.370 Cannot find device "nvmf_br" 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:24:29.370 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:29.370 Cannot find device "nvmf_init_if" 00:24:29.370 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:24:29.370 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:29.370 Cannot find device "nvmf_init_if2" 00:24:29.370 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:24:29.370 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:29.370 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:29.370 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:24:29.370 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:29.370 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:29.370 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:24:29.370 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:29.370 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:29.370 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:29.370 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:29.370 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:29.370 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:29.370 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:29.370 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:29.370 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:29.370 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:29.370 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:29.629 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:29.629 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:24:29.629 00:24:29.629 --- 10.0.0.3 ping statistics --- 00:24:29.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.629 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:24:29.629 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:29.629 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:24:29.630 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:24:29.630 00:24:29.630 --- 10.0.0.4 ping statistics --- 00:24:29.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.630 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:29.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:29.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:24:29.630 00:24:29.630 --- 10.0.0.1 ping statistics --- 00:24:29.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.630 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:29.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:29.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:24:29.630 00:24:29.630 --- 10.0.0.2 ping statistics --- 00:24:29.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.630 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # return 0 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=104824 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 104824 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 104824 ']' 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.630 18:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:29.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:29.630 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:29.888 [2024-12-14 18:44:45.391194] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:29.888 [2024-12-14 18:44:45.391295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.888 [2024-12-14 18:44:45.536026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:29.888 [2024-12-14 18:44:45.626064] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.888 [2024-12-14 18:44:45.626163] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.888 [2024-12-14 18:44:45.626190] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.888 [2024-12-14 18:44:45.626212] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.888 [2024-12-14 18:44:45.626234] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
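[editor's note] nvmfappstart above backgrounds nvmf_tgt inside the namespace, and waitforlisten then blocks until the RPC socket answers (capped at max_retries=100, per the trace). A minimal sketch of the same handshake, with an rpc_get_methods probe standing in for the real helper:

  spdk=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # A network namespace does not isolate the filesystem, so the
  # UNIX-domain RPC socket at /var/tmp/spdk.sock is reachable from the host.
  for ((i = 0; i < 100; i++)); do
      "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5   # retry cadence is an assumption
  done
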
00:24:29.888 [2024-12-14 18:44:45.626392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.888 [2024-12-14 18:44:45.627001] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.888 [2024-12-14 18:44:45.627283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:29.888 [2024-12-14 18:44:45.627290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.147 Malloc0 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.147 Delay0 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.147 [2024-12-14 18:44:45.859850] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.147 18:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.147 [2024-12-14 18:44:45.888334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.147 18:44:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:24:30.406 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:30.406 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:24:30.406 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:30.406 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:30.406 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:24:32.941 18:44:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:32.941 18:44:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:32.941 18:44:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:24:32.941 18:44:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:32.941 18:44:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:32.941 18:44:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:24:32.941 18:44:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=104894 00:24:32.941 18:44:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 
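[editor's note] Everything the initiator_timeout test just provisioned condenses to the RPC sequence below (names, sizes, and flags exactly as traced above; the rpc.py path is assumed). The point of wrapping Malloc0 in the Delay0 bdev shows up a few lines further on: bdev_delay_update_latency inflates its latencies to 31000000 us (31 s; 310 s for p99_write) to trip the initiator timeout while fio runs, then drops them back to 30 us.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path
  nqn=nqn.2016-06.io.spdk:cnode1
  "$rpc" bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512 B blocks
  "$rpc" bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # latencies in us
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
  "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
  nvme connect -t tcp -n "$nqn" -a 10.0.0.3 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 \
      --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6
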
00:24:32.941 18:44:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3
00:24:32.941 [global]
00:24:32.941 thread=1
00:24:32.941 invalidate=1
00:24:32.941 rw=write
00:24:32.941 time_based=1
00:24:32.941 runtime=60
00:24:32.941 ioengine=libaio
00:24:32.941 direct=1
00:24:32.941 bs=4096
00:24:32.941 iodepth=1
00:24:32.941 norandommap=0
00:24:32.941 numjobs=1
00:24:32.941
00:24:32.941 verify_dump=1
00:24:32.941 verify_backlog=512
00:24:32.941 verify_state_save=0
00:24:32.941 do_verify=1
00:24:32.941 verify=crc32c-intel
00:24:32.941 [job0]
00:24:32.941 filename=/dev/nvme0n1
00:24:32.941 Could not set queue depth (nvme0n1)
00:24:32.941 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:24:32.941 fio-3.35
00:24:32.941 Starting 1 thread
00:24:35.474 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000
00:24:35.474 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:35.474 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:35.474 true
00:24:35.474 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:35.474 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
00:24:35.474 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:35.474 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:35.474 true
00:24:35.474 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:35.474 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000
00:24:35.474 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:35.474 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:35.474 true
00:24:35.474 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:35.474 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
00:24:35.474 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:35.474 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:35.474 true
00:24:35.474 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:35.474 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3
00:24:38.761 18:44:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30
00:24:38.761 18:44:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:38.761 18:44:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:38.761 true
00:24:38.761 18:44:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:38.761 18:44:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30
00:24:38.761 18:44:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:38.761 18:44:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:38.761 true
00:24:38.761 18:44:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:38.761 18:44:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30
00:24:38.761 18:44:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:38.761 18:44:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:38.761 true
00:24:38.761 18:44:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:38.761 18:44:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30
00:24:38.761 18:44:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:38.761 18:44:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:38.761 true
00:24:38.761 18:44:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:38.761 18:44:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0
00:24:38.761 18:44:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 104894
00:25:35.033
00:25:35.033 job0: (groupid=0, jobs=1): err= 0: pid=104915: Sat Dec 14 18:45:48 2024
00:25:35.033 read: IOPS=802, BW=3209KiB/s (3286kB/s)(188MiB/60000msec)
00:25:35.033 slat (usec): min=11, max=11985, avg=14.93, stdev=62.66
00:25:35.033 clat (usec): min=153, max=40633k, avg=1048.09, stdev=185213.80
00:25:35.033 lat (usec): min=164, max=40633k, avg=1063.02, stdev=185213.81
00:25:35.033 clat percentiles (usec):
00:25:35.033 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 188],
00:25:35.033 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 206],
00:25:35.033 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 239],
00:25:35.033 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 318], 99.95th=[ 392],
00:25:35.033 | 99.99th=[ 2769]
00:25:35.033 write: IOPS=809, BW=3239KiB/s (3316kB/s)(190MiB/60000msec); 0 zone resets
00:25:35.033 slat (usec): min=16, max=708, avg=21.24, stdev= 6.99
00:25:35.033 clat (usec): min=116, max=4395, avg=157.60, stdev=32.85
00:25:35.033 lat (usec): min=134, max=4431, avg=178.84, stdev=34.21
00:25:35.033 clat percentiles (usec):
00:25:35.033 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143],
00:25:35.033 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159],
00:25:35.033 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 180], 95.00th=[ 190],
00:25:35.033 | 99.00th=[ 210], 99.50th=[ 219], 99.90th=[ 258], 99.95th=[ 277],
00:25:35.033 | 99.99th=[ 693]
00:25:35.033 bw ( KiB/s): min= 3056, max=12288, per=100.00%, avg=9716.79, stdev=1863.42, samples=39
00:25:35.033 iops : min= 764, max= 3072, avg=2429.18, stdev=465.84, samples=39
00:25:35.033 lat (usec) : 250=98.79%, 500=1.19%, 750=0.01%, 1000=0.01%
00:25:35.033 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01%
00:25:35.033 cpu : usr=0.57%, sys=2.16%, ctx=96712, majf=0, minf=5
00:25:35.033 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:25:35.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:35.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:35.033 issued rwts: total=48128,48579,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:35.033 latency : target=0, window=0, percentile=100.00%, depth=1
00:25:35.033
00:25:35.033 Run status group 0 (all jobs):
00:25:35.033 READ: bw=3209KiB/s (3286kB/s), 3209KiB/s-3209KiB/s (3286kB/s-3286kB/s), io=188MiB (197MB), run=60000-60000msec
00:25:35.033 WRITE: bw=3239KiB/s (3316kB/s), 3239KiB/s-3239KiB/s (3316kB/s-3316kB/s), io=190MiB (199MB), run=60000-60000msec
00:25:35.033
00:25:35.033 Disk stats (read/write):
00:25:35.033 nvme0n1: ios=48275/48128, merge=0/0, ticks=10536/8439, in_queue=18975, util=99.53%
00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:25:35.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0
00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0
00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']'
00:25:35.033 nvmf hotplug test: fio successful as expected 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected'
00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state
00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT
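What the minute-long jump in timestamps above represents: initiator_timeout.sh starts the verify workload, raises every Delay0 latency to roughly 31 s (past the NVMe host's default 30 s I/O timeout, which appears to be the point of the test; note the trace sets p99_write an order of magnitude higher, 310000000), then drops all four latencies back to 30 microseconds so the queued I/O drains and fio finishes within its 60 s runtime. A condensed sketch of that flow, with the fio job exactly as echoed above and rpc.py assumed on PATH:

# job0.fio, verbatim from the job file echoed in the trace
cat > job0.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=60
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1

verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF

fio job0.fio &
fio_pid=$!
sleep 3

# Raise the delay-bdev latencies (microseconds) past the initiator's I/O timeout.
rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
rpc.py bdev_delay_update_latency Delay0 p99_read 31000000
rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3

# Restore 30 us latencies so the stalled verify I/O completes before fio's runtime ends.
for lat in avg_read avg_write p99_read p99_write; do
  rpc.py bdev_delay_update_latency Delay0 "$lat" 30
done
wait "$fio_pid"   # propagates fio's exit status; 0 means the timeout was survived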
00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:35.033 rmmod nvme_tcp 00:25:35.033 rmmod nvme_fabrics 00:25:35.033 rmmod nvme_keyring 00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 104824 ']' 00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 104824 00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 104824 ']' 00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 104824 00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:25:35.033 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 104824 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 104824' 00:25:35.034 killing process with pid 104824 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 104824 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 104824 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:25:35.034 18:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:35.034 18:45:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:35.034 18:45:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:35.034 18:45:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:35.034 18:45:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:35.034 18:45:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.034 18:45:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.034 18:45:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.034 18:45:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:25:35.034 00:25:35.034 real 1m4.397s 00:25:35.034 user 4m4.251s 00:25:35.034 sys 0m8.553s 00:25:35.034 18:45:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:35.034 18:45:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:35.034 ************************************ 00:25:35.034 END TEST nvmf_initiator_timeout 00:25:35.034 ************************************ 00:25:35.034 18:45:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:25:35.034 18:45:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:25:35.034 00:25:35.034 real 13m51.660s 00:25:35.034 user 42m19.032s 00:25:35.034 sys 2m26.622s 
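The nvmftestfini teardown traced above is three independent steps: unload the host-side NVMe-oF modules, strip only the firewall rules the harness tagged with SPDK_NVMF comments, and dismantle the veth/bridge topology. A condensed sketch using the trace's interface and namespace names (run as root; the final netns delete is an assumption, since _remove_spdk_ns runs with its output suppressed above):

modprobe -v -r nvme-tcp       # also drops nvme_fabrics and nvme_keyring, as the rmmod lines show
modprobe -v -r nvme-fabrics   # no-op if the first removal already took it out
# Keep every iptables rule except the ones the harness tagged.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Detach the bridged veth ends, then delete the bridge and both veth pairs.
for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$link" nomaster
  ip link set "$link" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumed cleanup step; not visible in the trace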
00:25:35.034 18:45:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:35.034 18:45:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:35.034 ************************************ 00:25:35.034 END TEST nvmf_target_extra 00:25:35.034 ************************************ 00:25:35.034 18:45:49 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:35.034 18:45:49 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:35.034 18:45:49 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:35.034 18:45:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:35.034 ************************************ 00:25:35.034 START TEST nvmf_host 00:25:35.034 ************************************ 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:35.034 * Looking for test storage... 00:25:35.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:35.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.034 --rc genhtml_branch_coverage=1 00:25:35.034 --rc genhtml_function_coverage=1 00:25:35.034 --rc genhtml_legend=1 00:25:35.034 --rc geninfo_all_blocks=1 00:25:35.034 --rc geninfo_unexecuted_blocks=1 00:25:35.034 00:25:35.034 ' 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:35.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.034 --rc genhtml_branch_coverage=1 00:25:35.034 --rc genhtml_function_coverage=1 00:25:35.034 --rc genhtml_legend=1 00:25:35.034 --rc geninfo_all_blocks=1 00:25:35.034 --rc geninfo_unexecuted_blocks=1 00:25:35.034 00:25:35.034 ' 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:35.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.034 --rc genhtml_branch_coverage=1 00:25:35.034 --rc genhtml_function_coverage=1 00:25:35.034 --rc genhtml_legend=1 00:25:35.034 --rc geninfo_all_blocks=1 00:25:35.034 --rc geninfo_unexecuted_blocks=1 00:25:35.034 00:25:35.034 ' 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:35.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.034 --rc genhtml_branch_coverage=1 00:25:35.034 --rc genhtml_function_coverage=1 00:25:35.034 --rc genhtml_legend=1 00:25:35.034 --rc geninfo_all_blocks=1 00:25:35.034 --rc geninfo_unexecuted_blocks=1 00:25:35.034 00:25:35.034 ' 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.034 18:45:49 
nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.034 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:35.035 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.035 ************************************ 00:25:35.035 START TEST nvmf_multicontroller 00:25:35.035 ************************************ 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:35.035 * Looking for test storage... 
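The lcov probe that follows each "Looking for test storage" banner (it already ran once above for nvmf_host and repeats below for nvmf_multicontroller) is scripts/common.sh's lt/cmp_versions walking two dotted version strings field by field. A standalone rendering of that comparison, not the harness's exact helper:

version_lt() {
  # Succeed if dotted version $1 is strictly less than $2, comparing numerically field by field.
  local IFS=.
  local -a a=($1) b=($2)
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0
    ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
  done
  return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # same decision as 'lt 1.15 2' in the trace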
00:25:35.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:35.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.035 --rc genhtml_branch_coverage=1 00:25:35.035 --rc genhtml_function_coverage=1 00:25:35.035 --rc genhtml_legend=1 00:25:35.035 --rc geninfo_all_blocks=1 00:25:35.035 --rc geninfo_unexecuted_blocks=1 00:25:35.035 00:25:35.035 ' 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:35.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.035 --rc genhtml_branch_coverage=1 00:25:35.035 --rc genhtml_function_coverage=1 00:25:35.035 --rc genhtml_legend=1 00:25:35.035 --rc geninfo_all_blocks=1 00:25:35.035 --rc geninfo_unexecuted_blocks=1 00:25:35.035 00:25:35.035 ' 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:35.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.035 --rc genhtml_branch_coverage=1 00:25:35.035 --rc genhtml_function_coverage=1 00:25:35.035 --rc genhtml_legend=1 00:25:35.035 --rc geninfo_all_blocks=1 00:25:35.035 --rc geninfo_unexecuted_blocks=1 00:25:35.035 00:25:35.035 ' 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:35.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.035 --rc genhtml_branch_coverage=1 00:25:35.035 --rc genhtml_function_coverage=1 00:25:35.035 --rc genhtml_legend=1 00:25:35.035 --rc geninfo_all_blocks=1 00:25:35.035 --rc geninfo_unexecuted_blocks=1 00:25:35.035 00:25:35.035 ' 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:25:35.035 18:45:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.035 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:35.036 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:35.036 18:45:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@456 -- # nvmf_veth_init 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:35.036 18:45:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:35.036 Cannot find device "nvmf_init_br" 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:35.036 Cannot find device "nvmf_init_br2" 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:35.036 Cannot find device "nvmf_tgt_br" 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:35.036 Cannot find device "nvmf_tgt_br2" 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:35.036 Cannot find device "nvmf_init_br" 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:35.036 Cannot find device "nvmf_init_br2" 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:35.036 Cannot find device "nvmf_tgt_br" 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:35.036 Cannot find device "nvmf_tgt_br2" 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:35.036 Cannot find device "nvmf_br" 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:35.036 Cannot find device "nvmf_init_if" 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:35.036 Cannot find device "nvmf_init_if2" 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:35.036 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:35.036 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:35.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:35.037 18:45:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:25:35.037 18:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:25:35.037 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:25:35.037 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms
00:25:35.037
00:25:35.037 --- 10.0.0.3 ping statistics ---
00:25:35.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:35.037 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:25:35.037 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:25:35.037 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms
00:25:35.037
00:25:35.037 --- 10.0.0.4 ping statistics ---
00:25:35.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:35.037 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:25:35.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:35.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms
00:25:35.037
00:25:35.037 --- 10.0.0.1 ping statistics ---
00:25:35.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:35.037 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:25:35.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:35.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms
00:25:35.037
00:25:35.037 --- 10.0.0.2 ping statistics ---
00:25:35.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:35.037 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@457 -- # return 0
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # nvmfpid=105821
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # waitforlisten 105821
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 105821 ']'
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:35.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:35.037 [2024-12-14 18:45:50.139868] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
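For orientation while the multicontroller target boots below: the pings just above are the last step of nvmf_veth_init, which builds two initiator-side interfaces (10.0.0.1 and 10.0.0.2) and two target-side interfaces (10.0.0.3 and 10.0.0.4) inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge. The topology, condensed from the trace (root required):

ip netns add nvmf_tgt_ns_spdk
# Four veth pairs: the *_if ends carry addresses, the *_br ends get bridged.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
  ip link set "$l" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$l" master nvmf_br
done
# Open the NVMe/TCP port and allow bridge-local forwarding, as the ipts lines do above.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3   # sanity check, as in the trace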
00:25:35.037 [2024-12-14 18:45:50.139955] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.037 [2024-12-14 18:45:50.277688] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:35.037 [2024-12-14 18:45:50.357435] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:35.037 [2024-12-14 18:45:50.357507] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:35.037 [2024-12-14 18:45:50.357519] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:35.037 [2024-12-14 18:45:50.357527] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:35.037 [2024-12-14 18:45:50.357534] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:35.037 [2024-12-14 18:45:50.358523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:35.037 [2024-12-14 18:45:50.358742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:35.037 [2024-12-14 18:45:50.358754] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.037 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.038 [2024-12-14 18:45:50.564844] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.038 Malloc0 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.038 [2024-12-14 18:45:50.638990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.038 [2024-12-14 18:45:50.646793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.038 Malloc1 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=105854 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 105854 /var/tmp/bdevperf.sock 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 105854 ']' 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:35.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
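At this point the target is fully provisioned: two malloc-backed subsystems, each with one namespace and two listeners (ports 4420 and 4421 on 10.0.0.3), giving the initiator two network paths to the same storage. The same configuration can be reproduced against any running nvmf_tgt with SPDK's stock rpc.py client; a sketch of the calls the test issues through its rpc_cmd wrapper:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    # ...repeated for Malloc1 / nqn.2016-06.io.spdk:cnode2

bdevperf is started with -z, so it idles until perform_tests arrives on /var/tmp/bdevperf.sock; the bdev_nvme_attach_controller RPCs that follow are what actually wire it to the target.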
00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:35.038 18:45:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:36.416 NVMe0n1 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.416 1 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:36.416 2024/12/14 18:45:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:25:36.416 request: 00:25:36.416 { 00:25:36.416 "method": "bdev_nvme_attach_controller", 00:25:36.416 "params": { 00:25:36.416 "name": "NVMe0", 00:25:36.416 "trtype": "tcp", 00:25:36.416 "traddr": "10.0.0.3", 00:25:36.416 "adrfam": "ipv4", 00:25:36.416 "trsvcid": "4420", 00:25:36.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:36.416 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:36.416 "hostaddr": "10.0.0.1", 00:25:36.416 "prchk_reftag": false, 00:25:36.416 "prchk_guard": false, 00:25:36.416 "hdgst": false, 00:25:36.416 "ddgst": false, 00:25:36.416 "allow_unrecognized_csi": false 00:25:36.416 } 00:25:36.416 } 00:25:36.416 Got JSON-RPC error response 00:25:36.416 GoRPCClient: error on JSON-RPC call 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.416 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:36.416 2024/12/14 18:45:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:25:36.416 request: 00:25:36.416 { 00:25:36.416 "method": "bdev_nvme_attach_controller", 00:25:36.416 "params": { 00:25:36.417 "name": "NVMe0", 00:25:36.417 "trtype": "tcp", 00:25:36.417 "traddr": "10.0.0.3", 00:25:36.417 "adrfam": "ipv4", 00:25:36.417 "trsvcid": "4420", 00:25:36.417 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:36.417 "hostaddr": "10.0.0.1", 00:25:36.417 "prchk_reftag": false, 00:25:36.417 "prchk_guard": false, 00:25:36.417 "hdgst": false, 00:25:36.417 "ddgst": false, 00:25:36.417 "allow_unrecognized_csi": false 00:25:36.417 } 00:25:36.417 } 00:25:36.417 Got JSON-RPC error response 00:25:36.417 GoRPCClient: error on JSON-RPC call 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:36.417 2024/12/14 18:45:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:25:36.417 request: 00:25:36.417 { 00:25:36.417 
"method": "bdev_nvme_attach_controller", 00:25:36.417 "params": { 00:25:36.417 "name": "NVMe0", 00:25:36.417 "trtype": "tcp", 00:25:36.417 "traddr": "10.0.0.3", 00:25:36.417 "adrfam": "ipv4", 00:25:36.417 "trsvcid": "4420", 00:25:36.417 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:36.417 "hostaddr": "10.0.0.1", 00:25:36.417 "prchk_reftag": false, 00:25:36.417 "prchk_guard": false, 00:25:36.417 "hdgst": false, 00:25:36.417 "ddgst": false, 00:25:36.417 "multipath": "disable", 00:25:36.417 "allow_unrecognized_csi": false 00:25:36.417 } 00:25:36.417 } 00:25:36.417 Got JSON-RPC error response 00:25:36.417 GoRPCClient: error on JSON-RPC call 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:36.417 2024/12/14 18:45:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:25:36.417 request: 00:25:36.417 { 00:25:36.417 "method": "bdev_nvme_attach_controller", 00:25:36.417 "params": { 00:25:36.417 "name": "NVMe0", 00:25:36.417 "trtype": "tcp", 00:25:36.417 "traddr": 
"10.0.0.3", 00:25:36.417 "adrfam": "ipv4", 00:25:36.417 "trsvcid": "4420", 00:25:36.417 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:36.417 "hostaddr": "10.0.0.1", 00:25:36.417 "prchk_reftag": false, 00:25:36.417 "prchk_guard": false, 00:25:36.417 "hdgst": false, 00:25:36.417 "ddgst": false, 00:25:36.417 "multipath": "failover", 00:25:36.417 "allow_unrecognized_csi": false 00:25:36.417 } 00:25:36.417 } 00:25:36.417 Got JSON-RPC error response 00:25:36.417 GoRPCClient: error on JSON-RPC call 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.417 18:45:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:36.417 00:25:36.417 18:45:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.417 18:45:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:36.417 18:45:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.418 18:45:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:36.418 18:45:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.418 18:45:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:36.418 18:45:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.418 18:45:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:36.418 00:25:36.418 18:45:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.418 18:45:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:36.418 18:45:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.418 18:45:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:36.418 18:45:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:36.418 18:45:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.418 18:45:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:36.418 18:45:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:37.795 { 00:25:37.795 "results": [ 00:25:37.795 { 00:25:37.795 "job": "NVMe0n1", 00:25:37.795 "core_mask": "0x1", 00:25:37.795 "workload": "write", 00:25:37.795 "status": "finished", 00:25:37.795 "queue_depth": 128, 00:25:37.795 "io_size": 4096, 00:25:37.795 "runtime": 1.005925, 00:25:37.795 "iops": 17540.075055297362, 00:25:37.795 "mibps": 68.51591818475532, 00:25:37.795 "io_failed": 0, 00:25:37.795 "io_timeout": 0, 00:25:37.795 "avg_latency_us": 7285.94540096041, 00:25:37.795 "min_latency_us": 2308.6545454545453, 00:25:37.795 "max_latency_us": 12690.152727272727 00:25:37.795 } 00:25:37.795 ], 00:25:37.795 "core_count": 1 00:25:37.795 } 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.795 nvme1n1 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 
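The NOT-wrapped attempts above pin down bdev_nvme_attach_controller's duplicate handling: reusing the controller name NVMe0 for the same traddr/trsvcid fails with JSON-RPC error -114 whether the hostnqn differs, the subsystem differs, or '-x failover' is passed, and with '-x disable' the error instead reports that multipath is disabled; only a genuinely new path, port 4421 here, attaches cleanly and brings the controller count to 2. The performance numbers are self-consistent, too: 17540 IOPS of 4 KiB writes is 17540 x 4096 B, about 68.5 MiB/s, and Little's law predicts roughly queue_depth / avg_latency = 128 / 7.286 ms, about 17.6k IOPS. The hostaddr pinning exercised here can be checked from the target side with jq, roughly as follows (a sketch assuming the default rpc.py socket for the target and bdevperf's socket for the attach):

    # attach via a specific source address...
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2

    # ...then confirm the target sees the connection coming from that address
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 \
        | jq -r '.[].peer_address.traddr'    # expect 10.0.0.2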
00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.795 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.054 nvme1n1 00:25:38.054 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.054 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:25:38.054 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.054 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.054 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:25:38.054 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.054 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 00:25:38.054 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 105854 00:25:38.054 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 105854 ']' 00:25:38.054 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 105854 00:25:38.054 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:25:38.054 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:38.054 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105854 00:25:38.054 killing process with pid 105854 00:25:38.054 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:38.054 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:38.055 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105854' 00:25:38.055 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 105854 00:25:38.055 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 105854 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # 
trap - SIGINT SIGTERM EXIT 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:25:38.314 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:25:38.314 [2024-12-14 18:45:50.796106] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:38.314 [2024-12-14 18:45:50.796215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105854 ] 00:25:38.314 [2024-12-14 18:45:50.941052] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.314 [2024-12-14 18:45:51.039859] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.314 [2024-12-14 18:45:52.134403] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name 28467bf2-312a-4bf4-89ae-f2ac1fa1d6bf already exists 00:25:38.314 [2024-12-14 18:45:52.134924] bdev.c:7837:bdev_register: *ERROR*: Unable to add uuid:28467bf2-312a-4bf4-89ae-f2ac1fa1d6bf alias for bdev NVMe1n1 00:25:38.314 [2024-12-14 18:45:52.135042] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:38.314 Running I/O for 1 seconds... 
00:25:38.314 17516.00 IOPS, 68.42 MiB/s 00:25:38.314 Latency(us) 00:25:38.314 [2024-12-14T18:45:54.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.314 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:38.314 NVMe0n1 : 1.01 17540.08 68.52 0.00 0.00 7285.95 2308.65 12690.15 00:25:38.314 [2024-12-14T18:45:54.067Z] =================================================================================================================== 00:25:38.314 [2024-12-14T18:45:54.067Z] Total : 17540.08 68.52 0.00 0.00 7285.95 2308.65 12690.15 00:25:38.314 Received shutdown signal, test time was about 1.000000 seconds 00:25:38.314 00:25:38.314 Latency(us) 00:25:38.314 [2024-12-14T18:45:54.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.314 [2024-12-14T18:45:54.067Z] =================================================================================================================== 00:25:38.314 [2024-12-14T18:45:54.067Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:38.314 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:38.314 18:45:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:38.314 rmmod nvme_tcp 00:25:38.314 rmmod nvme_fabrics 00:25:38.314 rmmod nvme_keyring 00:25:38.314 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:38.314 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:25:38.314 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:25:38.314 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@513 -- # '[' -n 105821 ']' 00:25:38.314 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # killprocess 105821 00:25:38.314 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 105821 ']' 00:25:38.314 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 105821 00:25:38.314 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:25:38.314 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:38.314 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105821 00:25:38.314 killing process with pid 105821 00:25:38.314 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:38.573 18:45:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:38.573 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105821' 00:25:38.573 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 105821 00:25:38.573 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 105821 00:25:38.831 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:38.831 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:38.831 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:38.831 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:25:38.831 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-save 00:25:38.831 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:38.831 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-restore 00:25:38.831 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:38.831 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:38.831 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:38.831 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:38.831 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:38.831 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:38.831 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:38.831 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:38.831 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:38.831 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:38.831 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:38.832 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:38.832 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:38.832 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:39.090 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:39.090 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:39.090 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.090 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.090 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
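Teardown inverts setup. The iptr helper above avoids bookkeeping individual firewall rules: because every rule added earlier carried the SPDK_NVMF comment, a full dump filtered through grep restores the pre-test ruleset in one pipeline (this is the same save/filter/restore sequence visible in the trace):

    iptables-save | grep -v SPDK_NVMF | iptables-restore

The veth ends are then detached from the bridge, downed, and deleted, and remove_spdk_ns drops the nvmf_tgt_ns_spdk namespace, which takes the remaining target-side interfaces with it.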
00:25:39.090 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:25:39.090 00:25:39.090 real 0m5.243s 00:25:39.090 user 0m15.740s 00:25:39.090 sys 0m1.329s 00:25:39.090 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:39.090 18:45:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:39.090 ************************************ 00:25:39.090 END TEST nvmf_multicontroller 00:25:39.090 ************************************ 00:25:39.090 18:45:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:39.090 18:45:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:39.090 18:45:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:39.090 18:45:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.090 ************************************ 00:25:39.090 START TEST nvmf_aer 00:25:39.090 ************************************ 00:25:39.090 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:39.090 * Looking for test storage... 00:25:39.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:39.090 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:39.090 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:25:39.090 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:39.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.350 --rc genhtml_branch_coverage=1 00:25:39.350 --rc genhtml_function_coverage=1 00:25:39.350 --rc genhtml_legend=1 00:25:39.350 --rc geninfo_all_blocks=1 00:25:39.350 --rc geninfo_unexecuted_blocks=1 00:25:39.350 00:25:39.350 ' 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:39.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.350 --rc genhtml_branch_coverage=1 00:25:39.350 --rc genhtml_function_coverage=1 00:25:39.350 --rc genhtml_legend=1 00:25:39.350 --rc geninfo_all_blocks=1 00:25:39.350 --rc geninfo_unexecuted_blocks=1 00:25:39.350 00:25:39.350 ' 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:39.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.350 --rc genhtml_branch_coverage=1 00:25:39.350 --rc genhtml_function_coverage=1 00:25:39.350 --rc genhtml_legend=1 00:25:39.350 --rc geninfo_all_blocks=1 00:25:39.350 --rc geninfo_unexecuted_blocks=1 00:25:39.350 00:25:39.350 ' 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:39.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.350 --rc genhtml_branch_coverage=1 00:25:39.350 --rc genhtml_function_coverage=1 00:25:39.350 --rc genhtml_legend=1 00:25:39.350 --rc geninfo_all_blocks=1 00:25:39.350 --rc geninfo_unexecuted_blocks=1 00:25:39.350 00:25:39.350 ' 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.350 
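The scripts/common.sh activity just above is the harness checking whether the installed lcov predates 2.x before choosing coverage flags. The comparison helper splits each version string on '.', '-' and ':' and compares numerically field by field. A simplified re-implementation consistent with the trace (the function names match the log, but this body handles only the '<' operator and is an assumption about the full helper):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly larger
        done
        return 1   # equal versions are not '<'
    }

    lt 1.15 2 && echo "old lcov"   # matches the branch taken in the log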
18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:39.350 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.350 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ no == yes ]] 
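One genuine wart is visible here: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', an -eq test against a variable that is unset in this environment, so bash prints "integer expression expected" on every invocation. It is harmless, because the failed comparison evaluates false and the branch is simply skipped, but the noise is easy to silence with a default expansion (a hedged suggestion; the actual variable name at that line is not visible in this trace, so 'flag' below is a stand-in):

    flag=""                              # unset/empty in this environment
    # [ "$flag" -eq 1 ]                  # -> "[: : integer expression expected"
    if [ "${flag:-0}" -eq 1 ]; then      # defaults to 0, so the test stays quiet
        echo "flag enabled"
    fi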
00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@456 -- # nvmf_veth_init 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:39.351 Cannot find device "nvmf_init_br" 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:39.351 Cannot find device "nvmf_init_br2" 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:39.351 Cannot find device "nvmf_tgt_br" 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:39.351 Cannot find device "nvmf_tgt_br2" 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:39.351 Cannot find device "nvmf_init_br" 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:39.351 Cannot find device "nvmf_init_br2" 00:25:39.351 18:45:54 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:39.351 Cannot find device "nvmf_tgt_br" 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:25:39.351 18:45:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:39.351 Cannot find device "nvmf_tgt_br2" 00:25:39.351 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:25:39.351 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:39.351 Cannot find device "nvmf_br" 00:25:39.351 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:25:39.351 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:39.351 Cannot find device "nvmf_init_if" 00:25:39.351 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:25:39.351 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:39.351 Cannot find device "nvmf_init_if2" 00:25:39.351 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:25:39.351 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:39.351 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:39.351 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:25:39.351 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:39.351 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:39.351 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:25:39.351 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:39.351 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:39.351 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:39.351 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:39.351 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:39.351 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:39.351 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:39.609 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:39.610 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:39.610 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:25:39.610 00:25:39.610 --- 10.0.0.3 ping statistics --- 00:25:39.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.610 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:39.610 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:39.610 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:25:39.610 00:25:39.610 --- 10.0.0.4 ping statistics --- 00:25:39.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.610 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:39.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:39.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:25:39.610 00:25:39.610 --- 10.0.0.1 ping statistics --- 00:25:39.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.610 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:39.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:25:39.610 00:25:39.610 --- 10.0.0.2 ping statistics --- 00:25:39.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.610 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # return 0 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=106170 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 106170 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 106170 ']' 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:39.610 18:45:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:39.868 [2024-12-14 18:45:55.366468] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
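
The trace up to this point brings up the test network: one veth pair per endpoint, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace, the host-side peers enslaved to the nvmf_br bridge, and a one-packet ping per address to prove connectivity before the target starts. A minimal sketch of the same topology, using the interface names and addresses from the trace (illustrative recap only, run as root; the actual harness lives in test/nvmf/common.sh):

ip netns add nvmf_tgt_ns_spdk
# one veth pair per side: the *_if end carries the address, the *_br end joins the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target side
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ping -c 1 10.0.0.3   # the host now reaches the namespaced target over the bridge
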
00:25:39.868 [2024-12-14 18:45:55.366586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.868 [2024-12-14 18:45:55.510966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:39.868 [2024-12-14 18:45:55.614293] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:39.868 [2024-12-14 18:45:55.614376] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.868 [2024-12-14 18:45:55.614399] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:39.868 [2024-12-14 18:45:55.614415] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:39.868 [2024-12-14 18:45:55.614430] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:39.868 [2024-12-14 18:45:55.614592] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.868 [2024-12-14 18:45:55.614719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:39.868 [2024-12-14 18:45:55.615239] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:39.868 [2024-12-14 18:45:55.615255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:40.805 [2024-12-14 18:45:56.403511] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:40.805 Malloc0 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:40.805 [2024-12-14 18:45:56.459548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:40.805 [ 00:25:40.805 { 00:25:40.805 "allow_any_host": true, 00:25:40.805 "hosts": [], 00:25:40.805 "listen_addresses": [], 00:25:40.805 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:40.805 "subtype": "Discovery" 00:25:40.805 }, 00:25:40.805 { 00:25:40.805 "allow_any_host": true, 00:25:40.805 "hosts": [], 00:25:40.805 "listen_addresses": [ 00:25:40.805 { 00:25:40.805 "adrfam": "IPv4", 00:25:40.805 "traddr": "10.0.0.3", 00:25:40.805 "trsvcid": "4420", 00:25:40.805 "trtype": "TCP" 00:25:40.805 } 00:25:40.805 ], 00:25:40.805 "max_cntlid": 65519, 00:25:40.805 "max_namespaces": 2, 00:25:40.805 "min_cntlid": 1, 00:25:40.805 "model_number": "SPDK bdev Controller", 00:25:40.805 "namespaces": [ 00:25:40.805 { 00:25:40.805 "bdev_name": "Malloc0", 00:25:40.805 "name": "Malloc0", 00:25:40.805 "nguid": "9E1A0DBB19FB4918AFEB7FB16116A0BD", 00:25:40.805 "nsid": 1, 00:25:40.805 "uuid": "9e1a0dbb-19fb-4918-afeb-7fb16116a0bd" 00:25:40.805 } 00:25:40.805 ], 00:25:40.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:40.805 "serial_number": "SPDK00000000000001", 00:25:40.805 "subtype": "NVMe" 00:25:40.805 } 00:25:40.805 ] 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=106224 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:25:40.805 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:25:41.064 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:41.064 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:25:41.064 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:25:41.064 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:25:41.064 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:41.064 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:41.064 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:25:41.064 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:41.064 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.064 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.064 Malloc1 00:25:41.064 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.064 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:41.064 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.065 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.065 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.065 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:41.065 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.065 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.065 Asynchronous Event Request test 00:25:41.065 Attaching to 10.0.0.3 00:25:41.065 Attached to 10.0.0.3 00:25:41.065 Registering asynchronous event callbacks... 00:25:41.065 Starting namespace attribute notice tests for all controllers... 00:25:41.065 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:41.065 aer_cb - Changed Namespace 00:25:41.065 Cleaning up... 
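
With the network up, the test drives the target over JSON-RPC and then provokes the event under test: rpc_cmd in the trace is the test wrapper around SPDK's scripts/rpc.py, so the sequence above is equivalent to the sketch below (default RPC socket /var/tmp/spdk.sock assumed). Hot-adding the second namespace is what makes the target emit the Namespace Attribute Changed AEN that the aer tool reports as "aer_cb - Changed Namespace":

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB in-capsule data
$rpc bdev_malloc_create 64 512 --name Malloc0                 # 64 MiB ramdisk, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# the aer tool connects and registers its AER callback here, then:
$rpc bdev_malloc_create 64 4096 --name Malloc1                # second ramdisk, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # triggers the AEN
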
00:25:41.065 [ 00:25:41.065 { 00:25:41.065 "allow_any_host": true, 00:25:41.065 "hosts": [], 00:25:41.065 "listen_addresses": [], 00:25:41.065 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:41.065 "subtype": "Discovery" 00:25:41.065 }, 00:25:41.065 { 00:25:41.065 "allow_any_host": true, 00:25:41.065 "hosts": [], 00:25:41.065 "listen_addresses": [ 00:25:41.065 { 00:25:41.065 "adrfam": "IPv4", 00:25:41.065 "traddr": "10.0.0.3", 00:25:41.065 "trsvcid": "4420", 00:25:41.065 "trtype": "TCP" 00:25:41.065 } 00:25:41.065 ], 00:25:41.065 "max_cntlid": 65519, 00:25:41.065 "max_namespaces": 2, 00:25:41.065 "min_cntlid": 1, 00:25:41.065 "model_number": "SPDK bdev Controller", 00:25:41.065 "namespaces": [ 00:25:41.065 { 00:25:41.065 "bdev_name": "Malloc0", 00:25:41.065 "name": "Malloc0", 00:25:41.065 "nguid": "9E1A0DBB19FB4918AFEB7FB16116A0BD", 00:25:41.065 "nsid": 1, 00:25:41.065 "uuid": "9e1a0dbb-19fb-4918-afeb-7fb16116a0bd" 00:25:41.065 }, 00:25:41.065 { 00:25:41.065 "bdev_name": "Malloc1", 00:25:41.065 "name": "Malloc1", 00:25:41.065 "nguid": "10919C394D3D484899376A1B87903159", 00:25:41.065 "nsid": 2, 00:25:41.065 "uuid": "10919c39-4d3d-4848-9937-6a1b87903159" 00:25:41.065 } 00:25:41.065 ], 00:25:41.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.065 "serial_number": "SPDK00000000000001", 00:25:41.065 "subtype": "NVMe" 00:25:41.065 } 00:25:41.065 ] 00:25:41.065 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.065 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 106224 00:25:41.065 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:41.065 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.065 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:41.324 rmmod 
nvme_tcp 00:25:41.324 rmmod nvme_fabrics 00:25:41.324 rmmod nvme_keyring 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 106170 ']' 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 106170 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 106170 ']' 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 106170 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:41.324 18:45:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 106170 00:25:41.324 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:41.324 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:41.324 killing process with pid 106170 00:25:41.324 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 106170' 00:25:41.324 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 106170 00:25:41.324 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 106170 00:25:41.584 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:41.584 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:41.584 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:41.584 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:25:41.584 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-save 00:25:41.584 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:41.584 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-restore 00:25:41.584 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:41.584 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:41.584 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:41.584 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:41.584 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:41.584 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:41.843 18:45:57 
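
The teardown here mirrors the setup, and its firewall half relies on a tagging trick visible in the trace: the ipts wrapper stamps every rule it inserts with an "SPDK_NVMF:" comment, so cleanup can strip exactly those rules with a save/filter/restore pass instead of tracking them one by one. A sketch of the pattern as reconstructed from the traced commands:

ipts() {
    # insert the rule and tag it with its own argument string
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

# teardown: drop every tagged rule, leave all other rules untouched
iptables-save | grep -v SPDK_NVMF | iptables-restore
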
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:25:41.843 00:25:41.843 real 0m2.830s 00:25:41.843 user 0m6.924s 00:25:41.843 sys 0m0.865s 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.843 ************************************ 00:25:41.843 END TEST nvmf_aer 00:25:41.843 ************************************ 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.843 ************************************ 00:25:41.843 START TEST nvmf_async_init 00:25:41.843 ************************************ 00:25:41.843 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:42.105 * Looking for test storage... 
00:25:42.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.105 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:42.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.105 --rc genhtml_branch_coverage=1 00:25:42.105 --rc genhtml_function_coverage=1 00:25:42.105 --rc genhtml_legend=1 00:25:42.105 --rc geninfo_all_blocks=1 00:25:42.105 --rc geninfo_unexecuted_blocks=1 00:25:42.105 00:25:42.105 ' 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:42.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.106 --rc genhtml_branch_coverage=1 00:25:42.106 --rc genhtml_function_coverage=1 00:25:42.106 --rc genhtml_legend=1 00:25:42.106 --rc geninfo_all_blocks=1 00:25:42.106 --rc geninfo_unexecuted_blocks=1 00:25:42.106 00:25:42.106 ' 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:42.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.106 --rc genhtml_branch_coverage=1 00:25:42.106 --rc genhtml_function_coverage=1 00:25:42.106 --rc genhtml_legend=1 00:25:42.106 --rc geninfo_all_blocks=1 00:25:42.106 --rc geninfo_unexecuted_blocks=1 00:25:42.106 00:25:42.106 ' 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:42.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.106 --rc genhtml_branch_coverage=1 00:25:42.106 --rc genhtml_function_coverage=1 00:25:42.106 --rc genhtml_legend=1 00:25:42.106 --rc geninfo_all_blocks=1 00:25:42.106 --rc geninfo_unexecuted_blocks=1 00:25:42.106 00:25:42.106 ' 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.106 18:45:57 
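
Before sourcing the common helpers, the trace runs the scripts/common.sh version check (lt 1.15 2 via cmp_versions) to pick lcov flags: both version strings are split on '.', '-' and ':' and the fields compared numerically, left to right. A simplified sketch of that comparison, assuming purely numeric fields:

cmp_lt() {
    # cmp_lt A B -> success iff version A sorts strictly before version B
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < max; v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # versions are equal
}
cmp_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'pre-2.x lcov flags needed'
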
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:42.106 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:42.106 18:45:57 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=cf4eac0cfa424f888e477d498c970876 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@456 -- # nvmf_veth_init 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
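
One detail worth noting from the variables set above: the namespace GUID the test will attach to null0 is just a freshly generated UUID with the dashes stripped, which yields the 32-hex-digit NGUID form. As a one-line recap of the traced pipeline:

nguid=$(uuidgen | tr -d -)   # e.g. cf4eac0cfa424f888e477d498c970876
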
00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:42.106 Cannot find device "nvmf_init_br" 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:42.106 Cannot find device "nvmf_init_br2" 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:42.106 Cannot find device "nvmf_tgt_br" 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:42.106 Cannot find device "nvmf_tgt_br2" 00:25:42.106 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:25:42.107 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:42.374 Cannot find device "nvmf_init_br" 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:42.374 Cannot find device "nvmf_init_br2" 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:42.374 Cannot find device "nvmf_tgt_br" 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:42.374 Cannot find device "nvmf_tgt_br2" 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:42.374 Cannot find device "nvmf_br" 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:42.374 Cannot find device "nvmf_init_if" 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:42.374 Cannot find device "nvmf_init_if2" 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:42.374 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:25:42.374 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:42.374 18:45:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:42.374 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:42.374 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:42.374 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:42.374 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:42.374 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:42.374 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:42.374 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:42.374 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:42.374 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:42.374 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:42.374 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:42.375 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:42.375 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:42.375 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:42.375 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:42.375 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:42.375 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:42.375 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:42.633 18:45:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:42.633 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:42.633 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.134 ms 00:25:42.633 00:25:42.633 --- 10.0.0.3 ping statistics --- 00:25:42.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.633 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:42.633 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:42.633 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:25:42.633 00:25:42.633 --- 10.0.0.4 ping statistics --- 00:25:42.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.633 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:42.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:42.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:25:42.633 00:25:42.633 --- 10.0.0.1 ping statistics --- 00:25:42.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.633 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:42.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:42.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:25:42.633 00:25:42.633 --- 10.0.0.2 ping statistics --- 00:25:42.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.633 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # return 0 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:42.633 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:42.634 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:42.634 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:42.634 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:42.634 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.634 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=106443 00:25:42.634 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 106443 00:25:42.634 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 106443 ']' 00:25:42.634 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.634 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:42.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.634 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.634 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:42.634 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:42.634 18:45:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.634 [2024-12-14 18:45:58.284388] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
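[Editor's note] The ipts calls earlier in this span expand to plain iptables invocations with an SPDK_NVMF-tagged comment; tagging every rule is what lets the iptr teardown near the end of the test strip exactly the rules the harness added and nothing else. A sketch of the helper pair, inferred from the expansions shown in the log (the real definitions live in nvmf/common.sh):

    ipts() {
        # apply the rule and tag it with its own text so it can be found at cleanup
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {
        # rewrite the ruleset with every SPDK_NVMF-tagged rule filtered out
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT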
00:25:42.634 [2024-12-14 18:45:58.284502] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.892 [2024-12-14 18:45:58.423669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.892 [2024-12-14 18:45:58.516923] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.892 [2024-12-14 18:45:58.516987] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.892 [2024-12-14 18:45:58.517000] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.892 [2024-12-14 18:45:58.517008] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.892 [2024-12-14 18:45:58.517016] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:42.892 [2024-12-14 18:45:58.517051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:43.828 [2024-12-14 18:45:59.362148] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:43.828 null0 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g cf4eac0cfa424f888e477d498c970876 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:43.828 [2024-12-14 18:45:59.402334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.828 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.086 nvme0n1 00:25:44.086 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.086 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:44.086 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.086 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.086 [ 00:25:44.086 { 00:25:44.086 "aliases": [ 00:25:44.086 "cf4eac0c-fa42-4f88-8e47-7d498c970876" 00:25:44.086 ], 00:25:44.086 "assigned_rate_limits": { 00:25:44.086 "r_mbytes_per_sec": 0, 00:25:44.086 "rw_ios_per_sec": 0, 00:25:44.086 "rw_mbytes_per_sec": 0, 00:25:44.086 "w_mbytes_per_sec": 0 00:25:44.086 }, 00:25:44.086 "block_size": 512, 00:25:44.086 "claimed": false, 00:25:44.086 "driver_specific": { 00:25:44.086 "mp_policy": "active_passive", 00:25:44.086 "nvme": [ 00:25:44.086 { 00:25:44.086 "ctrlr_data": { 00:25:44.086 "ana_reporting": false, 00:25:44.086 "cntlid": 1, 00:25:44.086 "firmware_revision": "24.09.1", 00:25:44.086 "model_number": "SPDK bdev Controller", 00:25:44.086 "multi_ctrlr": true, 00:25:44.086 "oacs": { 00:25:44.086 "firmware": 0, 00:25:44.086 "format": 0, 00:25:44.086 "ns_manage": 0, 00:25:44.086 "security": 0 00:25:44.086 }, 00:25:44.086 "serial_number": "00000000000000000000", 00:25:44.086 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:44.086 "vendor_id": "0x8086" 00:25:44.086 }, 00:25:44.086 "ns_data": { 00:25:44.086 "can_share": true, 00:25:44.086 "id": 1 00:25:44.086 }, 00:25:44.086 "trid": { 00:25:44.086 "adrfam": "IPv4", 00:25:44.086 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:44.086 "traddr": "10.0.0.3", 00:25:44.086 
"trsvcid": "4420", 00:25:44.086 "trtype": "TCP" 00:25:44.086 }, 00:25:44.086 "vs": { 00:25:44.086 "nvme_version": "1.3" 00:25:44.086 } 00:25:44.086 } 00:25:44.086 ] 00:25:44.086 }, 00:25:44.086 "memory_domains": [ 00:25:44.086 { 00:25:44.086 "dma_device_id": "system", 00:25:44.086 "dma_device_type": 1 00:25:44.086 } 00:25:44.086 ], 00:25:44.086 "name": "nvme0n1", 00:25:44.086 "num_blocks": 2097152, 00:25:44.086 "numa_id": -1, 00:25:44.086 "product_name": "NVMe disk", 00:25:44.086 "supported_io_types": { 00:25:44.086 "abort": true, 00:25:44.086 "compare": true, 00:25:44.086 "compare_and_write": true, 00:25:44.086 "copy": true, 00:25:44.086 "flush": true, 00:25:44.086 "get_zone_info": false, 00:25:44.086 "nvme_admin": true, 00:25:44.086 "nvme_io": true, 00:25:44.086 "nvme_io_md": false, 00:25:44.086 "nvme_iov_md": false, 00:25:44.086 "read": true, 00:25:44.086 "reset": true, 00:25:44.086 "seek_data": false, 00:25:44.086 "seek_hole": false, 00:25:44.086 "unmap": false, 00:25:44.086 "write": true, 00:25:44.086 "write_zeroes": true, 00:25:44.086 "zcopy": false, 00:25:44.086 "zone_append": false, 00:25:44.086 "zone_management": false 00:25:44.086 }, 00:25:44.086 "uuid": "cf4eac0c-fa42-4f88-8e47-7d498c970876", 00:25:44.086 "zoned": false 00:25:44.086 } 00:25:44.086 ] 00:25:44.086 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.086 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:44.086 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.086 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.086 [2024-12-14 18:45:59.675567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:44.086 [2024-12-14 18:45:59.675729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e4aa0 (9): Bad file descriptor 00:25:44.087 [2024-12-14 18:45:59.807965] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:44.087 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.087 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:44.087 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.087 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.087 [ 00:25:44.087 { 00:25:44.087 "aliases": [ 00:25:44.087 "cf4eac0c-fa42-4f88-8e47-7d498c970876" 00:25:44.087 ], 00:25:44.087 "assigned_rate_limits": { 00:25:44.087 "r_mbytes_per_sec": 0, 00:25:44.087 "rw_ios_per_sec": 0, 00:25:44.087 "rw_mbytes_per_sec": 0, 00:25:44.087 "w_mbytes_per_sec": 0 00:25:44.087 }, 00:25:44.087 "block_size": 512, 00:25:44.087 "claimed": false, 00:25:44.087 "driver_specific": { 00:25:44.087 "mp_policy": "active_passive", 00:25:44.087 "nvme": [ 00:25:44.087 { 00:25:44.087 "ctrlr_data": { 00:25:44.087 "ana_reporting": false, 00:25:44.087 "cntlid": 2, 00:25:44.087 "firmware_revision": "24.09.1", 00:25:44.087 "model_number": "SPDK bdev Controller", 00:25:44.087 "multi_ctrlr": true, 00:25:44.087 "oacs": { 00:25:44.087 "firmware": 0, 00:25:44.087 "format": 0, 00:25:44.087 "ns_manage": 0, 00:25:44.087 "security": 0 00:25:44.087 }, 00:25:44.087 "serial_number": "00000000000000000000", 00:25:44.087 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:44.087 "vendor_id": "0x8086" 00:25:44.087 }, 00:25:44.087 "ns_data": { 00:25:44.087 "can_share": true, 00:25:44.087 "id": 1 00:25:44.346 }, 00:25:44.346 "trid": { 00:25:44.346 "adrfam": "IPv4", 00:25:44.346 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:44.346 "traddr": "10.0.0.3", 00:25:44.346 "trsvcid": "4420", 00:25:44.346 "trtype": "TCP" 00:25:44.346 }, 00:25:44.346 "vs": { 00:25:44.346 "nvme_version": "1.3" 00:25:44.346 } 00:25:44.346 } 00:25:44.346 ] 00:25:44.346 }, 00:25:44.346 "memory_domains": [ 00:25:44.346 { 00:25:44.346 "dma_device_id": "system", 00:25:44.346 "dma_device_type": 1 00:25:44.346 } 00:25:44.346 ], 00:25:44.346 "name": "nvme0n1", 00:25:44.346 "num_blocks": 2097152, 00:25:44.346 "numa_id": -1, 00:25:44.346 "product_name": "NVMe disk", 00:25:44.346 "supported_io_types": { 00:25:44.346 "abort": true, 00:25:44.346 "compare": true, 00:25:44.346 "compare_and_write": true, 00:25:44.346 "copy": true, 00:25:44.346 "flush": true, 00:25:44.346 "get_zone_info": false, 00:25:44.346 "nvme_admin": true, 00:25:44.346 "nvme_io": true, 00:25:44.346 "nvme_io_md": false, 00:25:44.346 "nvme_iov_md": false, 00:25:44.346 "read": true, 00:25:44.346 "reset": true, 00:25:44.346 "seek_data": false, 00:25:44.346 "seek_hole": false, 00:25:44.346 "unmap": false, 00:25:44.346 "write": true, 00:25:44.346 "write_zeroes": true, 00:25:44.346 "zcopy": false, 00:25:44.346 "zone_append": false, 00:25:44.346 "zone_management": false 00:25:44.346 }, 00:25:44.346 "uuid": "cf4eac0c-fa42-4f88-8e47-7d498c970876", 00:25:44.346 "zoned": false 00:25:44.346 } 00:25:44.346 ] 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.4QwC1HORdq 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.4QwC1HORdq 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.4QwC1HORdq 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.346 [2024-12-14 18:45:59.895778] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:44.346 [2024-12-14 18:45:59.895950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.346 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.347 [2024-12-14 18:45:59.911779] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:44.347 nvme0n1 00:25:44.347 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.347 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
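[Editor's note] The span above is the TLS leg of the test: a retained PSK in NVMe interchange format is written to a temp file, locked down to 0600, registered with the keyring, and a second listener on 4421 is opened with --secure-channel so that only the host configured with that key can attach. A sketch of the same steps, with the key string exactly as it appears in the log:

    key_path=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
    chmod 0600 "$key_path"                                    # harness restricts the key file before registering it
    ./scripts/rpc.py keyring_file_add_key key0 "$key_path"
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 \
        -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Both tcp.c and bdev_nvme_rpc.c log that TLS support is considered experimental, which is why the notices appear on both the listener and attach steps above.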
00:25:44.347 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.347 18:45:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.347 [ 00:25:44.347 { 00:25:44.347 "aliases": [ 00:25:44.347 "cf4eac0c-fa42-4f88-8e47-7d498c970876" 00:25:44.347 ], 00:25:44.347 "assigned_rate_limits": { 00:25:44.347 "r_mbytes_per_sec": 0, 00:25:44.347 "rw_ios_per_sec": 0, 00:25:44.347 "rw_mbytes_per_sec": 0, 00:25:44.347 "w_mbytes_per_sec": 0 00:25:44.347 }, 00:25:44.347 "block_size": 512, 00:25:44.347 "claimed": false, 00:25:44.347 "driver_specific": { 00:25:44.347 "mp_policy": "active_passive", 00:25:44.347 "nvme": [ 00:25:44.347 { 00:25:44.347 "ctrlr_data": { 00:25:44.347 "ana_reporting": false, 00:25:44.347 "cntlid": 3, 00:25:44.347 "firmware_revision": "24.09.1", 00:25:44.347 "model_number": "SPDK bdev Controller", 00:25:44.347 "multi_ctrlr": true, 00:25:44.347 "oacs": { 00:25:44.347 "firmware": 0, 00:25:44.347 "format": 0, 00:25:44.347 "ns_manage": 0, 00:25:44.347 "security": 0 00:25:44.347 }, 00:25:44.347 "serial_number": "00000000000000000000", 00:25:44.347 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:44.347 "vendor_id": "0x8086" 00:25:44.347 }, 00:25:44.347 "ns_data": { 00:25:44.347 "can_share": true, 00:25:44.347 "id": 1 00:25:44.347 }, 00:25:44.347 "trid": { 00:25:44.347 "adrfam": "IPv4", 00:25:44.347 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:44.347 "traddr": "10.0.0.3", 00:25:44.347 "trsvcid": "4421", 00:25:44.347 "trtype": "TCP" 00:25:44.347 }, 00:25:44.347 "vs": { 00:25:44.347 "nvme_version": "1.3" 00:25:44.347 } 00:25:44.347 } 00:25:44.347 ] 00:25:44.347 }, 00:25:44.347 "memory_domains": [ 00:25:44.347 { 00:25:44.347 "dma_device_id": "system", 00:25:44.347 "dma_device_type": 1 00:25:44.347 } 00:25:44.347 ], 00:25:44.347 "name": "nvme0n1", 00:25:44.347 "num_blocks": 2097152, 00:25:44.347 "numa_id": -1, 00:25:44.347 "product_name": "NVMe disk", 00:25:44.347 "supported_io_types": { 00:25:44.347 "abort": true, 00:25:44.347 "compare": true, 00:25:44.347 "compare_and_write": true, 00:25:44.347 "copy": true, 00:25:44.347 "flush": true, 00:25:44.347 "get_zone_info": false, 00:25:44.347 "nvme_admin": true, 00:25:44.347 "nvme_io": true, 00:25:44.347 "nvme_io_md": false, 00:25:44.347 "nvme_iov_md": false, 00:25:44.347 "read": true, 00:25:44.347 "reset": true, 00:25:44.347 "seek_data": false, 00:25:44.347 "seek_hole": false, 00:25:44.347 "unmap": false, 00:25:44.347 "write": true, 00:25:44.347 "write_zeroes": true, 00:25:44.347 "zcopy": false, 00:25:44.347 "zone_append": false, 00:25:44.347 "zone_management": false 00:25:44.347 }, 00:25:44.347 "uuid": "cf4eac0c-fa42-4f88-8e47-7d498c970876", 00:25:44.347 "zoned": false 00:25:44.347 } 00:25:44.347 ] 00:25:44.347 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.347 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.347 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.347 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.347 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.347 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.4QwC1HORdq 00:25:44.347 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM 
EXIT 00:25:44.347 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:25:44.347 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:44.347 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:25:44.347 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:44.347 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:25:44.347 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:44.347 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:44.606 rmmod nvme_tcp 00:25:44.606 rmmod nvme_fabrics 00:25:44.606 rmmod nvme_keyring 00:25:44.606 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:44.606 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:25:44.606 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:25:44.606 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 106443 ']' 00:25:44.606 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 106443 00:25:44.606 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 106443 ']' 00:25:44.607 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 106443 00:25:44.607 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:25:44.607 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:44.607 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 106443 00:25:44.607 killing process with pid 106443 00:25:44.607 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:44.607 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:44.607 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 106443' 00:25:44.607 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 106443 00:25:44.607 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 106443 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-save 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-restore 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:44.866 
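[Editor's note] nvmftestfini, running above, unwinds the host side in a fixed order: unload the nvme-tcp module stack (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring are its output), kill the target by the pid recorded at startup, then strip the tagged firewall rules before the links come down. A sketch of that order, with killprocess's uname/ps sanity checks elided:

    modprobe -v -r nvme-tcp                                   # dependency removal also drops nvme_fabrics, nvme_keyring
    kill 106443 && wait 106443                                # nvmfpid from nvmfappstart; wait works because it is our child
    iptables-save | grep -v SPDK_NVMF | iptables-restore      # iptr: remove only the SPDK_NVMF-tagged rules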
18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:44.866 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:45.124 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:45.124 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:45.124 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.124 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.124 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.124 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:25:45.124 00:25:45.124 real 0m3.113s 00:25:45.124 user 0m2.727s 00:25:45.124 sys 0m0.783s 00:25:45.124 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:45.124 ************************************ 00:25:45.124 END TEST nvmf_async_init 00:25:45.124 ************************************ 00:25:45.124 18:46:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:45.124 18:46:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:45.124 18:46:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:45.124 18:46:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:45.124 18:46:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.124 ************************************ 00:25:45.124 START TEST dma 00:25:45.124 ************************************ 00:25:45.124 18:46:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:45.124 * Looking for test storage... 
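[Editor's note] nvmf_veth_fini, completed just above before the dma test starts, is the exact mirror of the setup: detach the peers from the bridge, down them, delete the bridge and the host-side links, delete the in-namespace links, and finally remove the namespace itself via _remove_spdk_ns. Condensed to one pair per side:

    ip link set nvmf_init_br nomaster
    ip link set nvmf_tgt_br nomaster
    ip link set nvmf_init_br down
    ip link set nvmf_tgt_br down
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk                          # _remove_spdk_ns, output silenced in the trace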
00:25:45.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:45.125 18:46:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:45.125 18:46:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:25:45.125 18:46:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:45.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.384 --rc genhtml_branch_coverage=1 00:25:45.384 --rc genhtml_function_coverage=1 00:25:45.384 --rc genhtml_legend=1 00:25:45.384 --rc geninfo_all_blocks=1 00:25:45.384 --rc geninfo_unexecuted_blocks=1 00:25:45.384 00:25:45.384 ' 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:45.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.384 --rc genhtml_branch_coverage=1 00:25:45.384 --rc genhtml_function_coverage=1 00:25:45.384 --rc genhtml_legend=1 00:25:45.384 --rc geninfo_all_blocks=1 00:25:45.384 --rc geninfo_unexecuted_blocks=1 00:25:45.384 00:25:45.384 ' 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:45.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.384 --rc genhtml_branch_coverage=1 00:25:45.384 --rc genhtml_function_coverage=1 00:25:45.384 --rc genhtml_legend=1 00:25:45.384 --rc geninfo_all_blocks=1 00:25:45.384 --rc geninfo_unexecuted_blocks=1 00:25:45.384 00:25:45.384 ' 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:45.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.384 --rc genhtml_branch_coverage=1 00:25:45.384 --rc genhtml_function_coverage=1 00:25:45.384 --rc genhtml_legend=1 00:25:45.384 --rc geninfo_all_blocks=1 00:25:45.384 --rc geninfo_unexecuted_blocks=1 00:25:45.384 00:25:45.384 ' 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.384 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.385 18:46:00 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:45.385 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:25:45.385 00:25:45.385 real 0m0.217s 00:25:45.385 user 0m0.143s 00:25:45.385 sys 0m0.086s 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:45.385 ************************************ 00:25:45.385 END TEST dma 00:25:45.385 ************************************ 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:45.385 18:46:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.385 ************************************ 00:25:45.385 START TEST nvmf_identify 00:25:45.385 ************************************ 00:25:45.385 18:46:01 
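[Editor's note] TEST dma passed above without doing any work: dma.sh guards on the transport and exits immediately when it is not rdma, since the memory-domain path it exercises only exists there. The guard as a sketch; the variable name is an assumption, the log only shows the expanded comparison '[' tcp '!=' rdma ']':

    if [ "$TEST_TRANSPORT" != rdma ]; then
        exit 0      # dma.sh@13: nothing to test on tcp
    fi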
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:45.385 * Looking for test storage... 00:25:45.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:45.385 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:45.385 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:25:45.385 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:45.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.643 --rc genhtml_branch_coverage=1 00:25:45.643 --rc genhtml_function_coverage=1 00:25:45.643 --rc genhtml_legend=1 00:25:45.643 --rc geninfo_all_blocks=1 00:25:45.643 --rc geninfo_unexecuted_blocks=1 00:25:45.643 00:25:45.643 ' 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:45.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.643 --rc genhtml_branch_coverage=1 00:25:45.643 --rc genhtml_function_coverage=1 00:25:45.643 --rc genhtml_legend=1 00:25:45.643 --rc geninfo_all_blocks=1 00:25:45.643 --rc geninfo_unexecuted_blocks=1 00:25:45.643 00:25:45.643 ' 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:45.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.643 --rc genhtml_branch_coverage=1 00:25:45.643 --rc genhtml_function_coverage=1 00:25:45.643 --rc genhtml_legend=1 00:25:45.643 --rc geninfo_all_blocks=1 00:25:45.643 --rc geninfo_unexecuted_blocks=1 00:25:45.643 00:25:45.643 ' 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:45.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.643 --rc genhtml_branch_coverage=1 00:25:45.643 --rc genhtml_function_coverage=1 00:25:45.643 --rc genhtml_legend=1 00:25:45.643 --rc geninfo_all_blocks=1 00:25:45.643 --rc geninfo_unexecuted_blocks=1 00:25:45.643 00:25:45.643 ' 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:25:45.643 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.644 
18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:45.644 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.644 18:46:01 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # nvmf_veth_init 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:45.644 Cannot find device "nvmf_init_br" 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:45.644 Cannot find device "nvmf_init_br2" 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:45.644 Cannot find device "nvmf_tgt_br" 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:25:45.644 Cannot find device "nvmf_tgt_br2" 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:45.644 Cannot find device "nvmf_init_br" 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:45.644 Cannot find device "nvmf_init_br2" 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:45.644 Cannot find device "nvmf_tgt_br" 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:45.644 Cannot find device "nvmf_tgt_br2" 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:45.644 Cannot find device "nvmf_br" 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:45.644 Cannot find device "nvmf_init_if" 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:45.644 Cannot find device "nvmf_init_if2" 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:45.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:45.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:45.644 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:45.902 
18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:45.902 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:45.903 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:45.903 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:45.903 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:45.903 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:45.903 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:45.903 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:25:45.903 00:25:45.903 --- 10.0.0.3 ping statistics --- 00:25:45.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.903 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:25:45.903 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:45.903 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:45.903 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:25:45.903 00:25:45.903 --- 10.0.0.4 ping statistics --- 00:25:45.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.903 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:25:45.903 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:45.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:45.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:25:45.903 00:25:45.903 --- 10.0.0.1 ping statistics --- 00:25:45.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.903 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:25:45.903 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:46.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:46.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:25:46.161 00:25:46.161 --- 10.0.0.2 ping statistics --- 00:25:46.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.161 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # return 0 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=106780 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 106780 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 106780 ']' 00:25:46.161 
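The four ping checks above are the gate for everything that follows: each initiator address (10.0.0.1, 10.0.0.2) must reach the target addresses (10.0.0.3, 10.0.0.4) living inside the nvmf_tgt_ns_spdk namespace, and the reverse path must work from inside the namespace. What nvmf_veth_init built can be reproduced standalone with the sketch below (run as root; interface names and addresses are exactly the ones in this log, while the SPDK_NVMF rule comments that the ipts wrapper adds for later cleanup are omitted for brevity):

    # Namespace that will hold the target-side interfaces.
    ip netns add nvmf_tgt_ns_spdk

    # Four veth pairs: the *_if end carries traffic, the *_br end joins the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Initiator addresses in the root namespace, target addresses in the netns.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up, including loopback inside the namespace.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # One bridge stitches the four *_br peers into a single L2 segment, which is
    # why the root-namespace initiators and the namespaced target share 10.0.0.0/24.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Admit NVMe/TCP traffic on port 4420 and allow bridge-internal forwarding.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT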
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:46.161 18:46:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:46.161 [2024-12-14 18:46:01.757529] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:46.161 [2024-12-14 18:46:01.757642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.162 [2024-12-14 18:46:01.902844] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:46.419 [2024-12-14 18:46:01.979525] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.419 [2024-12-14 18:46:01.979842] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.419 [2024-12-14 18:46:01.980002] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.419 [2024-12-14 18:46:01.980071] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:46.419 [2024-12-14 18:46:01.980189] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
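Note the tracing hints in the startup notices: the target was launched with -e 0xFFFF, so every nvmf tracepoint group is live, and instance id 0 (-i 0) names the shared-memory trace file. A minimal sketch of both suggested workflows, assuming spdk_trace was built into build/bin alongside nvmf_tgt (the binary's location can differ per build configuration):

    # Snapshot the live nvmf tracepoints of app instance 0, as the notice suggests.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0

    # Or keep the shared-memory trace file for offline analysis after the app exits.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0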
00:25:46.419 [2024-12-14 18:46:01.983656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.419 [2024-12-14 18:46:01.983805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:46.419 [2024-12-14 18:46:01.984437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:46.419 [2024-12-14 18:46:01.984531] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.419 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:46.419 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:25:46.419 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:46.419 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.419 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:46.419 [2024-12-14 18:46:02.143107] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.419 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.419 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:46.419 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:46.419 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:46.677 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:46.677 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.677 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:46.677 Malloc0 00:25:46.677 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:46.678 [2024-12-14 18:46:02.250386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:46.678 [ 00:25:46.678 { 00:25:46.678 "allow_any_host": true, 00:25:46.678 "hosts": [], 00:25:46.678 "listen_addresses": [ 00:25:46.678 { 00:25:46.678 "adrfam": "IPv4", 00:25:46.678 "traddr": "10.0.0.3", 00:25:46.678 "trsvcid": "4420", 00:25:46.678 "trtype": "TCP" 00:25:46.678 } 00:25:46.678 ], 00:25:46.678 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:46.678 "subtype": "Discovery" 00:25:46.678 }, 00:25:46.678 { 00:25:46.678 "allow_any_host": true, 00:25:46.678 "hosts": [], 00:25:46.678 "listen_addresses": [ 00:25:46.678 { 00:25:46.678 "adrfam": "IPv4", 00:25:46.678 "traddr": "10.0.0.3", 00:25:46.678 "trsvcid": "4420", 00:25:46.678 "trtype": "TCP" 00:25:46.678 } 00:25:46.678 ], 00:25:46.678 "max_cntlid": 65519, 00:25:46.678 "max_namespaces": 32, 00:25:46.678 "min_cntlid": 1, 00:25:46.678 "model_number": "SPDK bdev Controller", 00:25:46.678 "namespaces": [ 00:25:46.678 { 00:25:46.678 "bdev_name": "Malloc0", 00:25:46.678 "eui64": "ABCDEF0123456789", 00:25:46.678 "name": "Malloc0", 00:25:46.678 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:46.678 "nsid": 1, 00:25:46.678 "uuid": "abacb9aa-884a-4706-8bc5-d1a12dffb560" 00:25:46.678 } 00:25:46.678 ], 00:25:46.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.678 "serial_number": "SPDK00000000000001", 00:25:46.678 "subtype": "NVMe" 00:25:46.678 } 00:25:46.678 ] 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.678 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:46.678 [2024-12-14 18:46:02.313212] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
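The subsystem listing above is the result of the rpc_cmd provisioning calls: rpc_cmd is autotest shorthand for SPDK's JSON-RPC client, which talks to the target over /var/tmp/spdk.sock. A sketch of the same provisioning done by hand, assuming the usual mapping of rpc_cmd onto scripts/rpc.py (every argument is copied from the log above):

    cd /home/vagrant/spdk_repo/spdk

    # TCP transport; -o and -u 8192 mirror the NVMF_TRANSPORT_OPTS used by the test.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

    # 64 MiB malloc bdev with 512-byte blocks to back the namespace.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

    # Subsystem open to any host (-a), plus namespace, data listener, and discovery listener.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

    # Should print the same two-element JSON array shown above.
    scripts/rpc.py nvmf_get_subsystems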
00:25:46.678 [2024-12-14 18:46:02.313468] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106814 ] 00:25:46.940 [2024-12-14 18:46:02.460453] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:46.940 [2024-12-14 18:46:02.460517] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:46.940 [2024-12-14 18:46:02.460525] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:46.940 [2024-12-14 18:46:02.460538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:46.940 [2024-12-14 18:46:02.460547] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:46.940 [2024-12-14 18:46:02.460955] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:46.940 [2024-12-14 18:46:02.461021] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xdf9a80 0 00:25:46.940 [2024-12-14 18:46:02.474678] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:46.940 [2024-12-14 18:46:02.474705] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:46.940 [2024-12-14 18:46:02.474727] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:46.940 [2024-12-14 18:46:02.474731] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:46.940 [2024-12-14 18:46:02.474767] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.940 [2024-12-14 18:46:02.474774] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.940 [2024-12-14 18:46:02.474778] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdf9a80) 00:25:46.940 [2024-12-14 18:46:02.474793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:46.940 [2024-12-14 18:46:02.474829] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe3fcc0, cid 0, qid 0 00:25:46.940 [2024-12-14 18:46:02.482650] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.940 [2024-12-14 18:46:02.482672] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.940 [2024-12-14 18:46:02.482693] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.940 [2024-12-14 18:46:02.482698] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe3fcc0) on tqpair=0xdf9a80 00:25:46.940 [2024-12-14 18:46:02.482712] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:46.940 [2024-12-14 18:46:02.482720] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:46.940 [2024-12-14 18:46:02.482726] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:46.940 [2024-12-14 18:46:02.482745] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.940 [2024-12-14 18:46:02.482751] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.940 
[2024-12-14 18:46:02.482755] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdf9a80) 00:25:46.940 [2024-12-14 18:46:02.482764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.940 [2024-12-14 18:46:02.482794] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe3fcc0, cid 0, qid 0 00:25:46.940 [2024-12-14 18:46:02.482919] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.940 [2024-12-14 18:46:02.482927] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.940 [2024-12-14 18:46:02.482931] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.940 [2024-12-14 18:46:02.482936] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe3fcc0) on tqpair=0xdf9a80 00:25:46.940 [2024-12-14 18:46:02.482942] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:46.940 [2024-12-14 18:46:02.482950] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:46.940 [2024-12-14 18:46:02.482958] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.940 [2024-12-14 18:46:02.482963] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.940 [2024-12-14 18:46:02.482967] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdf9a80) 00:25:46.940 [2024-12-14 18:46:02.482975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.940 [2024-12-14 18:46:02.482999] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe3fcc0, cid 0, qid 0 00:25:46.940 [2024-12-14 18:46:02.483060] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.940 [2024-12-14 18:46:02.483066] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.940 [2024-12-14 18:46:02.483070] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.940 [2024-12-14 18:46:02.483075] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe3fcc0) on tqpair=0xdf9a80 00:25:46.940 [2024-12-14 18:46:02.483081] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:46.940 [2024-12-14 18:46:02.483090] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:46.940 [2024-12-14 18:46:02.483098] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.940 [2024-12-14 18:46:02.483103] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.940 [2024-12-14 18:46:02.483106] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdf9a80) 00:25:46.940 [2024-12-14 18:46:02.483114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.940 [2024-12-14 18:46:02.483136] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe3fcc0, cid 0, qid 0 00:25:46.940 [2024-12-14 18:46:02.483195] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.940 [2024-12-14 18:46:02.483212] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:25:46.940 [2024-12-14 18:46:02.483215] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.940 [2024-12-14 18:46:02.483220] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe3fcc0) on tqpair=0xdf9a80 00:25:46.940 [2024-12-14 18:46:02.483225] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:46.940 [2024-12-14 18:46:02.483236] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.940 [2024-12-14 18:46:02.483241] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.940 [2024-12-14 18:46:02.483245] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdf9a80) 00:25:46.940 [2024-12-14 18:46:02.483252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.940 [2024-12-14 18:46:02.483273] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe3fcc0, cid 0, qid 0 00:25:46.940 [2024-12-14 18:46:02.483347] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.940 [2024-12-14 18:46:02.483354] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.940 [2024-12-14 18:46:02.483358] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.940 [2024-12-14 18:46:02.483362] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe3fcc0) on tqpair=0xdf9a80 00:25:46.940 [2024-12-14 18:46:02.483367] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:46.940 [2024-12-14 18:46:02.483372] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:46.940 [2024-12-14 18:46:02.483380] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:46.940 [2024-12-14 18:46:02.483486] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:46.941 [2024-12-14 18:46:02.483502] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:46.941 [2024-12-14 18:46:02.483512] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.483517] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.483521] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdf9a80) 00:25:46.941 [2024-12-14 18:46:02.483528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.941 [2024-12-14 18:46:02.483549] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe3fcc0, cid 0, qid 0 00:25:46.941 [2024-12-14 18:46:02.483647] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.941 [2024-12-14 18:46:02.483656] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.941 [2024-12-14 18:46:02.483661] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.483665] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe3fcc0) on tqpair=0xdf9a80 00:25:46.941 [2024-12-14 18:46:02.483671] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:46.941 [2024-12-14 18:46:02.483682] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.483687] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.483690] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdf9a80) 00:25:46.941 [2024-12-14 18:46:02.483698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.941 [2024-12-14 18:46:02.483721] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe3fcc0, cid 0, qid 0 00:25:46.941 [2024-12-14 18:46:02.483785] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.941 [2024-12-14 18:46:02.483792] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.941 [2024-12-14 18:46:02.483796] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.483800] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe3fcc0) on tqpair=0xdf9a80 00:25:46.941 [2024-12-14 18:46:02.483805] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:46.941 [2024-12-14 18:46:02.483811] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:46.941 [2024-12-14 18:46:02.483819] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:46.941 [2024-12-14 18:46:02.483835] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:46.941 [2024-12-14 18:46:02.483846] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.483850] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdf9a80) 00:25:46.941 [2024-12-14 18:46:02.483858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.941 [2024-12-14 18:46:02.483880] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe3fcc0, cid 0, qid 0 00:25:46.941 [2024-12-14 18:46:02.484001] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.941 [2024-12-14 18:46:02.484009] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.941 [2024-12-14 18:46:02.484012] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484016] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdf9a80): datao=0, datal=4096, cccid=0 00:25:46.941 [2024-12-14 18:46:02.484022] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe3fcc0) on tqpair(0xdf9a80): expected_datao=0, payload_size=4096 00:25:46.941 [2024-12-14 18:46:02.484027] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484035] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484039] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484048] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.941 [2024-12-14 18:46:02.484054] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.941 [2024-12-14 18:46:02.484057] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484061] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe3fcc0) on tqpair=0xdf9a80 00:25:46.941 [2024-12-14 18:46:02.484070] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:46.941 [2024-12-14 18:46:02.484075] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:46.941 [2024-12-14 18:46:02.484080] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:46.941 [2024-12-14 18:46:02.484085] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:46.941 [2024-12-14 18:46:02.484090] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:46.941 [2024-12-14 18:46:02.484095] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:46.941 [2024-12-14 18:46:02.484104] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:46.941 [2024-12-14 18:46:02.484112] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484116] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484120] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdf9a80) 00:25:46.941 [2024-12-14 18:46:02.484128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:46.941 [2024-12-14 18:46:02.484149] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe3fcc0, cid 0, qid 0 00:25:46.941 [2024-12-14 18:46:02.484215] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.941 [2024-12-14 18:46:02.484222] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.941 [2024-12-14 18:46:02.484225] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484229] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe3fcc0) on tqpair=0xdf9a80 00:25:46.941 [2024-12-14 18:46:02.484237] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484242] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484245] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdf9a80) 00:25:46.941 [2024-12-14 18:46:02.484252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.941 [2024-12-14 18:46:02.484259] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 
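The DEBUG flood from the FABRIC CONNECT above down to the keep-alive setup below is the host driver's controller initialization state machine running over the TCP transport: read VS and CAP, check CC.EN, hold the controller disabled until CSTS.RDY = 0, set CC.EN = 1, wait for CSTS.RDY = 1, then identify the controller, configure AER, and arm the keep-alive timer. Each step is announced by an nvme_ctrlr.c "setting state to ..." record, so a capture like this can be reduced to the transition list, e.g. (build.log is a hypothetical file holding this console output):

    # Print the ordered controller-init state transitions, collapsing repeats.
    grep -o 'setting state to [^(]*' build.log | uniq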
00:25:46.941 [2024-12-14 18:46:02.484263] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484267] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xdf9a80) 00:25:46.941 [2024-12-14 18:46:02.484273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.941 [2024-12-14 18:46:02.484279] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484284] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484287] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xdf9a80) 00:25:46.941 [2024-12-14 18:46:02.484293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.941 [2024-12-14 18:46:02.484299] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484303] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484307] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.941 [2024-12-14 18:46:02.484313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.941 [2024-12-14 18:46:02.484318] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:46.941 [2024-12-14 18:46:02.484331] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:46.941 [2024-12-14 18:46:02.484339] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484343] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdf9a80) 00:25:46.941 [2024-12-14 18:46:02.484350] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.941 [2024-12-14 18:46:02.484373] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe3fcc0, cid 0, qid 0 00:25:46.941 [2024-12-14 18:46:02.484380] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe3fe40, cid 1, qid 0 00:25:46.941 [2024-12-14 18:46:02.484385] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe3ffc0, cid 2, qid 0 00:25:46.941 [2024-12-14 18:46:02.484390] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.941 [2024-12-14 18:46:02.484395] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe402c0, cid 4, qid 0 00:25:46.941 [2024-12-14 18:46:02.484498] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.941 [2024-12-14 18:46:02.484505] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.941 [2024-12-14 18:46:02.484509] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484513] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe402c0) on tqpair=0xdf9a80 00:25:46.941 [2024-12-14 18:46:02.484519] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:46.941 [2024-12-14 18:46:02.484524] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:46.941 [2024-12-14 18:46:02.484535] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484539] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdf9a80) 00:25:46.941 [2024-12-14 18:46:02.484547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.941 [2024-12-14 18:46:02.484567] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe402c0, cid 4, qid 0 00:25:46.941 [2024-12-14 18:46:02.484676] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.941 [2024-12-14 18:46:02.484685] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.941 [2024-12-14 18:46:02.484689] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484693] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdf9a80): datao=0, datal=4096, cccid=4 00:25:46.941 [2024-12-14 18:46:02.484698] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe402c0) on tqpair(0xdf9a80): expected_datao=0, payload_size=4096 00:25:46.941 [2024-12-14 18:46:02.484703] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484710] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484715] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.941 [2024-12-14 18:46:02.484725] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.941 [2024-12-14 18:46:02.484731] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.942 [2024-12-14 18:46:02.484735] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.484739] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe402c0) on tqpair=0xdf9a80 00:25:46.942 [2024-12-14 18:46:02.484753] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:46.942 [2024-12-14 18:46:02.484811] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.484822] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdf9a80) 00:25:46.942 [2024-12-14 18:46:02.484831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.942 [2024-12-14 18:46:02.484840] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.484844] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.484848] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdf9a80) 00:25:46.942 [2024-12-14 18:46:02.484855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.942 [2024-12-14 18:46:02.484885] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe402c0, cid 4, qid 0 00:25:46.942 [2024-12-14 18:46:02.484893] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40440, cid 5, qid 0 00:25:46.942 [2024-12-14 18:46:02.485039] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.942 [2024-12-14 18:46:02.485046] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.942 [2024-12-14 18:46:02.485050] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.485053] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdf9a80): datao=0, datal=1024, cccid=4 00:25:46.942 [2024-12-14 18:46:02.485058] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe402c0) on tqpair(0xdf9a80): expected_datao=0, payload_size=1024 00:25:46.942 [2024-12-14 18:46:02.485063] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.485070] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.485074] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.485080] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.942 [2024-12-14 18:46:02.485086] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.942 [2024-12-14 18:46:02.485090] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.485094] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40440) on tqpair=0xdf9a80 00:25:46.942 [2024-12-14 18:46:02.527709] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.942 [2024-12-14 18:46:02.527730] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.942 [2024-12-14 18:46:02.527751] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.527756] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe402c0) on tqpair=0xdf9a80 00:25:46.942 [2024-12-14 18:46:02.527771] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.527777] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdf9a80) 00:25:46.942 [2024-12-14 18:46:02.527785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.942 [2024-12-14 18:46:02.527818] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe402c0, cid 4, qid 0 00:25:46.942 [2024-12-14 18:46:02.527928] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.942 [2024-12-14 18:46:02.527935] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.942 [2024-12-14 18:46:02.527938] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.527942] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdf9a80): datao=0, datal=3072, cccid=4 00:25:46.942 [2024-12-14 18:46:02.527946] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe402c0) on tqpair(0xdf9a80): expected_datao=0, payload_size=3072 00:25:46.942 [2024-12-14 18:46:02.527951] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.527958] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.527962] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.942 [2024-12-14 
18:46:02.527970] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.942 [2024-12-14 18:46:02.527991] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.942 [2024-12-14 18:46:02.527994] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.528014] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe402c0) on tqpair=0xdf9a80 00:25:46.942 [2024-12-14 18:46:02.528025] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.528029] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdf9a80) 00:25:46.942 [2024-12-14 18:46:02.528037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.942 [2024-12-14 18:46:02.528064] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe402c0, cid 4, qid 0 00:25:46.942 [2024-12-14 18:46:02.528158] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.942 [2024-12-14 18:46:02.528165] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.942 [2024-12-14 18:46:02.528169] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.528173] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdf9a80): datao=0, datal=8, cccid=4 00:25:46.942 [2024-12-14 18:46:02.528177] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe402c0) on tqpair(0xdf9a80): expected_datao=0, payload_size=8 00:25:46.942 [2024-12-14 18:46:02.528182] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.528189] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.528193] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.568777] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.942 [2024-12-14 18:46:02.568809] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.942 [2024-12-14 18:46:02.568814] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.942 [2024-12-14 18:46:02.568819] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe402c0) on tqpair=0xdf9a80 00:25:46.942 ===================================================== 00:25:46.942 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:46.942 ===================================================== 00:25:46.942 Controller Capabilities/Features 00:25:46.942 ================================ 00:25:46.942 Vendor ID: 0000 00:25:46.942 Subsystem Vendor ID: 0000 00:25:46.942 Serial Number: .................... 00:25:46.942 Model Number: ........................................ 
00:25:46.942 Firmware Version: 24.09.1 00:25:46.942 Recommended Arb Burst: 0 00:25:46.942 IEEE OUI Identifier: 00 00 00 00:25:46.942 Multi-path I/O 00:25:46.942 May have multiple subsystem ports: No 00:25:46.942 May have multiple controllers: No 00:25:46.942 Associated with SR-IOV VF: No 00:25:46.942 Max Data Transfer Size: 131072 00:25:46.942 Max Number of Namespaces: 0 00:25:46.942 Max Number of I/O Queues: 1024 00:25:46.942 NVMe Specification Version (VS): 1.3 00:25:46.942 NVMe Specification Version (Identify): 1.3 00:25:46.942 Maximum Queue Entries: 128 00:25:46.942 Contiguous Queues Required: Yes 00:25:46.942 Arbitration Mechanisms Supported 00:25:46.942 Weighted Round Robin: Not Supported 00:25:46.942 Vendor Specific: Not Supported 00:25:46.942 Reset Timeout: 15000 ms 00:25:46.942 Doorbell Stride: 4 bytes 00:25:46.942 NVM Subsystem Reset: Not Supported 00:25:46.942 Command Sets Supported 00:25:46.942 NVM Command Set: Supported 00:25:46.942 Boot Partition: Not Supported 00:25:46.942 Memory Page Size Minimum: 4096 bytes 00:25:46.942 Memory Page Size Maximum: 4096 bytes 00:25:46.942 Persistent Memory Region: Not Supported 00:25:46.942 Optional Asynchronous Events Supported 00:25:46.942 Namespace Attribute Notices: Not Supported 00:25:46.942 Firmware Activation Notices: Not Supported 00:25:46.942 ANA Change Notices: Not Supported 00:25:46.942 PLE Aggregate Log Change Notices: Not Supported 00:25:46.942 LBA Status Info Alert Notices: Not Supported 00:25:46.942 EGE Aggregate Log Change Notices: Not Supported 00:25:46.942 Normal NVM Subsystem Shutdown event: Not Supported 00:25:46.942 Zone Descriptor Change Notices: Not Supported 00:25:46.942 Discovery Log Change Notices: Supported 00:25:46.942 Controller Attributes 00:25:46.942 128-bit Host Identifier: Not Supported 00:25:46.942 Non-Operational Permissive Mode: Not Supported 00:25:46.942 NVM Sets: Not Supported 00:25:46.942 Read Recovery Levels: Not Supported 00:25:46.942 Endurance Groups: Not Supported 00:25:46.942 Predictable Latency Mode: Not Supported 00:25:46.942 Traffic Based Keep ALive: Not Supported 00:25:46.942 Namespace Granularity: Not Supported 00:25:46.942 SQ Associations: Not Supported 00:25:46.942 UUID List: Not Supported 00:25:46.942 Multi-Domain Subsystem: Not Supported 00:25:46.942 Fixed Capacity Management: Not Supported 00:25:46.942 Variable Capacity Management: Not Supported 00:25:46.942 Delete Endurance Group: Not Supported 00:25:46.942 Delete NVM Set: Not Supported 00:25:46.942 Extended LBA Formats Supported: Not Supported 00:25:46.942 Flexible Data Placement Supported: Not Supported 00:25:46.942 00:25:46.942 Controller Memory Buffer Support 00:25:46.942 ================================ 00:25:46.942 Supported: No 00:25:46.942 00:25:46.942 Persistent Memory Region Support 00:25:46.942 ================================ 00:25:46.942 Supported: No 00:25:46.942 00:25:46.942 Admin Command Set Attributes 00:25:46.943 ============================ 00:25:46.943 Security Send/Receive: Not Supported 00:25:46.943 Format NVM: Not Supported 00:25:46.943 Firmware Activate/Download: Not Supported 00:25:46.943 Namespace Management: Not Supported 00:25:46.943 Device Self-Test: Not Supported 00:25:46.943 Directives: Not Supported 00:25:46.943 NVMe-MI: Not Supported 00:25:46.943 Virtualization Management: Not Supported 00:25:46.943 Doorbell Buffer Config: Not Supported 00:25:46.943 Get LBA Status Capability: Not Supported 00:25:46.943 Command & Feature Lockdown Capability: Not Supported 00:25:46.943 Abort Command Limit: 1 00:25:46.943 
Async Event Request Limit: 4 00:25:46.943 Number of Firmware Slots: N/A 00:25:46.943 Firmware Slot 1 Read-Only: N/A 00:25:46.943 Firmware Activation Without Reset: N/A 00:25:46.943 Multiple Update Detection Support: N/A 00:25:46.943 Firmware Update Granularity: No Information Provided 00:25:46.943 Per-Namespace SMART Log: No 00:25:46.943 Asymmetric Namespace Access Log Page: Not Supported 00:25:46.943 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:46.943 Command Effects Log Page: Not Supported 00:25:46.943 Get Log Page Extended Data: Supported 00:25:46.943 Telemetry Log Pages: Not Supported 00:25:46.943 Persistent Event Log Pages: Not Supported 00:25:46.943 Supported Log Pages Log Page: May Support 00:25:46.943 Commands Supported & Effects Log Page: Not Supported 00:25:46.943 Feature Identifiers & Effects Log Page:May Support 00:25:46.943 NVMe-MI Commands & Effects Log Page: May Support 00:25:46.943 Data Area 4 for Telemetry Log: Not Supported 00:25:46.943 Error Log Page Entries Supported: 128 00:25:46.943 Keep Alive: Not Supported 00:25:46.943 00:25:46.943 NVM Command Set Attributes 00:25:46.943 ========================== 00:25:46.943 Submission Queue Entry Size 00:25:46.943 Max: 1 00:25:46.943 Min: 1 00:25:46.943 Completion Queue Entry Size 00:25:46.943 Max: 1 00:25:46.943 Min: 1 00:25:46.943 Number of Namespaces: 0 00:25:46.943 Compare Command: Not Supported 00:25:46.943 Write Uncorrectable Command: Not Supported 00:25:46.943 Dataset Management Command: Not Supported 00:25:46.943 Write Zeroes Command: Not Supported 00:25:46.943 Set Features Save Field: Not Supported 00:25:46.943 Reservations: Not Supported 00:25:46.943 Timestamp: Not Supported 00:25:46.943 Copy: Not Supported 00:25:46.943 Volatile Write Cache: Not Present 00:25:46.943 Atomic Write Unit (Normal): 1 00:25:46.943 Atomic Write Unit (PFail): 1 00:25:46.943 Atomic Compare & Write Unit: 1 00:25:46.943 Fused Compare & Write: Supported 00:25:46.943 Scatter-Gather List 00:25:46.943 SGL Command Set: Supported 00:25:46.943 SGL Keyed: Supported 00:25:46.943 SGL Bit Bucket Descriptor: Not Supported 00:25:46.943 SGL Metadata Pointer: Not Supported 00:25:46.943 Oversized SGL: Not Supported 00:25:46.943 SGL Metadata Address: Not Supported 00:25:46.943 SGL Offset: Supported 00:25:46.943 Transport SGL Data Block: Not Supported 00:25:46.943 Replay Protected Memory Block: Not Supported 00:25:46.943 00:25:46.943 Firmware Slot Information 00:25:46.943 ========================= 00:25:46.943 Active slot: 0 00:25:46.943 00:25:46.943 00:25:46.943 Error Log 00:25:46.943 ========= 00:25:46.943 00:25:46.943 Active Namespaces 00:25:46.943 ================= 00:25:46.943 Discovery Log Page 00:25:46.943 ================== 00:25:46.943 Generation Counter: 2 00:25:46.943 Number of Records: 2 00:25:46.943 Record Format: 0 00:25:46.943 00:25:46.943 Discovery Log Entry 0 00:25:46.943 ---------------------- 00:25:46.943 Transport Type: 3 (TCP) 00:25:46.943 Address Family: 1 (IPv4) 00:25:46.943 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:46.943 Entry Flags: 00:25:46.943 Duplicate Returned Information: 1 00:25:46.943 Explicit Persistent Connection Support for Discovery: 1 00:25:46.943 Transport Requirements: 00:25:46.943 Secure Channel: Not Required 00:25:46.943 Port ID: 0 (0x0000) 00:25:46.943 Controller ID: 65535 (0xffff) 00:25:46.943 Admin Max SQ Size: 128 00:25:46.943 Transport Service Identifier: 4420 00:25:46.943 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:46.943 Transport Address: 10.0.0.3 00:25:46.943 
00:25:46.943 Discovery Log Entry 1
00:25:46.943 ----------------------
00:25:46.943 Transport Type: 3 (TCP)
00:25:46.943 Address Family: 1 (IPv4)
00:25:46.943 Subsystem Type: 2 (NVM Subsystem)
00:25:46.943 Entry Flags:
00:25:46.943 Duplicate Returned Information: 0
00:25:46.943 Explicit Persistent Connection Support for Discovery: 0
00:25:46.943 Transport Requirements:
00:25:46.943 Secure Channel: Not Required
00:25:46.943 Port ID: 0 (0x0000)
00:25:46.943 Controller ID: 65535 (0xffff)
00:25:46.943 Admin Max SQ Size: 128
00:25:46.943 Transport Service Identifier: 4420
00:25:46.943 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:25:46.943 Transport Address: 10.0.0.3
[2024-12-14 18:46:02.568913] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:46.943 [2024-12-14 18:46:02.568926] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe3fcc0) on tqpair=0xdf9a80 00:25:46.943 [2024-12-14 18:46:02.568934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.943 [2024-12-14 18:46:02.568939] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe3fe40) on tqpair=0xdf9a80 00:25:46.943 [2024-12-14 18:46:02.568944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.943 [2024-12-14 18:46:02.568949] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe3ffc0) on tqpair=0xdf9a80 00:25:46.943 [2024-12-14 18:46:02.568953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.943 [2024-12-14 18:46:02.568958] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.943 [2024-12-14 18:46:02.568962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.943 [2024-12-14 18:46:02.568972] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.943 [2024-12-14 18:46:02.568976] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.943 [2024-12-14 18:46:02.568980] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.943 [2024-12-14 18:46:02.569004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.943 [2024-12-14 18:46:02.569045] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.943 [2024-12-14 18:46:02.569109] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.943 [2024-12-14 18:46:02.569116] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.943 [2024-12-14 18:46:02.569120] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.943 [2024-12-14 18:46:02.569124] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.943 [2024-12-14 18:46:02.569132] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.943 [2024-12-14 18:46:02.569137] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.943 [2024-12-14 18:46:02.569140] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.943 [2024-12-14 18:46:02.569148]
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.943 [2024-12-14 18:46:02.569174] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.943 [2024-12-14 18:46:02.569264] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.943 [2024-12-14 18:46:02.569271] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.943 [2024-12-14 18:46:02.569275] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.943 [2024-12-14 18:46:02.569279] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.943 [2024-12-14 18:46:02.569284] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:46.943 [2024-12-14 18:46:02.569294] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:46.943 [2024-12-14 18:46:02.569305] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.943 [2024-12-14 18:46:02.569310] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.943 [2024-12-14 18:46:02.569314] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.943 [2024-12-14 18:46:02.569321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.943 [2024-12-14 18:46:02.569342] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.943 [2024-12-14 18:46:02.569404] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.943 [2024-12-14 18:46:02.569411] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.943 [2024-12-14 18:46:02.569415] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.943 [2024-12-14 18:46:02.569419] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.943 [2024-12-14 18:46:02.569430] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.943 [2024-12-14 18:46:02.569435] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.943 [2024-12-14 18:46:02.569439] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.943 [2024-12-14 18:46:02.569446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.943 [2024-12-14 18:46:02.569466] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.944 [2024-12-14 18:46:02.569523] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.944 [2024-12-14 18:46:02.569530] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.944 [2024-12-14 18:46:02.569534] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.569538] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.944 [2024-12-14 18:46:02.569548] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.569553] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.569556] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.944 [2024-12-14 18:46:02.569564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.944 [2024-12-14 18:46:02.569583] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.944 [2024-12-14 18:46:02.569660] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.944 [2024-12-14 18:46:02.569669] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.944 [2024-12-14 18:46:02.569672] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.569677] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.944 [2024-12-14 18:46:02.569687] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.569692] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.569696] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.944 [2024-12-14 18:46:02.569703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.944 [2024-12-14 18:46:02.569725] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.944 [2024-12-14 18:46:02.569799] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.944 [2024-12-14 18:46:02.569806] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.944 [2024-12-14 18:46:02.569810] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.569814] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.944 [2024-12-14 18:46:02.569824] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.569829] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.569832] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.944 [2024-12-14 18:46:02.569840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.944 [2024-12-14 18:46:02.569859] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.944 [2024-12-14 18:46:02.569920] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.944 [2024-12-14 18:46:02.569927] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.944 [2024-12-14 18:46:02.569930] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.569934] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.944 [2024-12-14 18:46:02.569945] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.569949] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.569953] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.944 [2024-12-14 18:46:02.569960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.944 [2024-12-14 18:46:02.569980] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.944 [2024-12-14 18:46:02.570044] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.944 [2024-12-14 18:46:02.570056] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.944 [2024-12-14 18:46:02.570061] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.570065] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.944 [2024-12-14 18:46:02.570076] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.570081] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.570084] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.944 [2024-12-14 18:46:02.570092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.944 [2024-12-14 18:46:02.570112] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.944 [2024-12-14 18:46:02.570189] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.944 [2024-12-14 18:46:02.570196] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.944 [2024-12-14 18:46:02.570200] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.570204] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.944 [2024-12-14 18:46:02.570214] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.570219] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.570222] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.944 [2024-12-14 18:46:02.570230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.944 [2024-12-14 18:46:02.570250] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.944 [2024-12-14 18:46:02.570316] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.944 [2024-12-14 18:46:02.570322] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.944 [2024-12-14 18:46:02.570326] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.570330] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.944 [2024-12-14 18:46:02.570340] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.570345] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.570349] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.944 [2024-12-14 18:46:02.570356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.944 [2024-12-14 18:46:02.570376] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.944 [2024-12-14 18:46:02.570434] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.944 [2024-12-14 18:46:02.570441] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.944 [2024-12-14 18:46:02.570444] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.570448] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.944 [2024-12-14 18:46:02.570459] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.570463] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.570467] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.944 [2024-12-14 18:46:02.570474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.944 [2024-12-14 18:46:02.570494] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.944 [2024-12-14 18:46:02.570559] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.944 [2024-12-14 18:46:02.570566] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.944 [2024-12-14 18:46:02.570569] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.570573] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.944 [2024-12-14 18:46:02.570584] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.570588] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.570592] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.944 [2024-12-14 18:46:02.570610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.944 [2024-12-14 18:46:02.570649] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.944 [2024-12-14 18:46:02.570710] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.944 [2024-12-14 18:46:02.570717] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.944 [2024-12-14 18:46:02.570721] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.570725] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.944 [2024-12-14 18:46:02.570736] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.570740] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.944 [2024-12-14 18:46:02.570744] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.944 [2024-12-14 18:46:02.570752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.944 [2024-12-14 18:46:02.570772] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.945 [2024-12-14 18:46:02.570849] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.945 [2024-12-14 18:46:02.570866] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.945 [2024-12-14 18:46:02.570871] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.570875] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.945 [2024-12-14 18:46:02.570901] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.570906] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.570910] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.945 [2024-12-14 18:46:02.570918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.945 [2024-12-14 18:46:02.570940] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.945 [2024-12-14 18:46:02.571006] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.945 [2024-12-14 18:46:02.571013] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.945 [2024-12-14 18:46:02.571017] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.571021] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.945 [2024-12-14 18:46:02.571032] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.571036] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.571040] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.945 [2024-12-14 18:46:02.571048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.945 [2024-12-14 18:46:02.571068] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.945 [2024-12-14 18:46:02.571134] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.945 [2024-12-14 18:46:02.571145] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.945 [2024-12-14 18:46:02.571150] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.571155] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.945 [2024-12-14 18:46:02.571166] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.571171] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.571175] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.945 [2024-12-14 18:46:02.571182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.945 [2024-12-14 18:46:02.571204] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.945 [2024-12-14 18:46:02.571301] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.945 [2024-12-14 18:46:02.571308] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.945 [2024-12-14 18:46:02.571327] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.571331] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.945 
[2024-12-14 18:46:02.571341] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.571346] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.571349] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.945 [2024-12-14 18:46:02.571357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.945 [2024-12-14 18:46:02.571376] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.945 [2024-12-14 18:46:02.571441] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.945 [2024-12-14 18:46:02.571448] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.945 [2024-12-14 18:46:02.571452] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.571456] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.945 [2024-12-14 18:46:02.571466] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.571471] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.571475] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.945 [2024-12-14 18:46:02.571482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.945 [2024-12-14 18:46:02.571501] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.945 [2024-12-14 18:46:02.571557] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.945 [2024-12-14 18:46:02.571564] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.945 [2024-12-14 18:46:02.571568] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.571572] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.945 [2024-12-14 18:46:02.571582] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.571586] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.571590] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.945 [2024-12-14 18:46:02.571597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.945 [2024-12-14 18:46:02.575671] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.945 [2024-12-14 18:46:02.575711] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.945 [2024-12-14 18:46:02.575719] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.945 [2024-12-14 18:46:02.575722] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.575727] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.945 [2024-12-14 18:46:02.575740] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.575746] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.945 [2024-12-14 
18:46:02.575749] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdf9a80) 00:25:46.945 [2024-12-14 18:46:02.575758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.945 [2024-12-14 18:46:02.575783] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe40140, cid 3, qid 0 00:25:46.945 [2024-12-14 18:46:02.575859] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.945 [2024-12-14 18:46:02.575866] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.945 [2024-12-14 18:46:02.575870] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.945 [2024-12-14 18:46:02.575874] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe40140) on tqpair=0xdf9a80 00:25:46.945 [2024-12-14 18:46:02.575881] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:25:46.945 00:25:46.945 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:46.945 [2024-12-14 18:46:02.620581] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:46.945 [2024-12-14 18:46:02.620657] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106826 ] 00:25:47.209 [2024-12-14 18:46:02.760676] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:47.209 [2024-12-14 18:46:02.766726] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:47.209 [2024-12-14 18:46:02.766736] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:47.209 [2024-12-14 18:46:02.766749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:47.209 [2024-12-14 18:46:02.766759] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:47.209 [2024-12-14 18:46:02.767115] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:47.209 [2024-12-14 18:46:02.767192] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x5e6a80 0 00:25:47.209 [2024-12-14 18:46:02.773648] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:47.209 [2024-12-14 18:46:02.773674] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:47.209 [2024-12-14 18:46:02.773681] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:47.209 [2024-12-14 18:46:02.773685] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:47.209 [2024-12-14 18:46:02.773720] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.773729] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.773734] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5e6a80) 00:25:47.209 [2024-12-14 18:46:02.773748] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:47.209 [2024-12-14 18:46:02.773780] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62ccc0, cid 0, qid 0 00:25:47.209 [2024-12-14 18:46:02.780663] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.209 [2024-12-14 18:46:02.780683] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.209 [2024-12-14 18:46:02.780688] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.780694] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62ccc0) on tqpair=0x5e6a80 00:25:47.209 [2024-12-14 18:46:02.780705] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:47.209 [2024-12-14 18:46:02.780713] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:47.209 [2024-12-14 18:46:02.780720] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:47.209 [2024-12-14 18:46:02.780738] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.780745] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.780749] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5e6a80) 00:25:47.209 [2024-12-14 18:46:02.780758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.209 [2024-12-14 18:46:02.780788] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62ccc0, cid 0, qid 0 00:25:47.209 [2024-12-14 18:46:02.780858] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.209 [2024-12-14 18:46:02.780866] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.209 [2024-12-14 18:46:02.780869] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.780874] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62ccc0) on tqpair=0x5e6a80 00:25:47.209 [2024-12-14 18:46:02.780896] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:47.209 [2024-12-14 18:46:02.780904] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:47.209 [2024-12-14 18:46:02.780912] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.780916] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.780920] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5e6a80) 00:25:47.209 [2024-12-14 18:46:02.780928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.209 [2024-12-14 18:46:02.780949] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62ccc0, cid 0, qid 0 00:25:47.209 [2024-12-14 18:46:02.781015] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.209 [2024-12-14 18:46:02.781023] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.209 [2024-12-14 18:46:02.781026] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.781031] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62ccc0) on tqpair=0x5e6a80 00:25:47.209 [2024-12-14 18:46:02.781037] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:47.209 [2024-12-14 18:46:02.781045] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:47.209 [2024-12-14 18:46:02.781053] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.781057] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.781061] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5e6a80) 00:25:47.209 [2024-12-14 18:46:02.781069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.209 [2024-12-14 18:46:02.781089] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62ccc0, cid 0, qid 0 00:25:47.209 [2024-12-14 18:46:02.781159] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.209 [2024-12-14 18:46:02.781166] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.209 [2024-12-14 18:46:02.781170] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.781174] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62ccc0) on tqpair=0x5e6a80 00:25:47.209 [2024-12-14 18:46:02.781180] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:47.209 [2024-12-14 18:46:02.781190] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.781195] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.781199] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5e6a80) 00:25:47.209 [2024-12-14 18:46:02.781207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.209 [2024-12-14 18:46:02.781226] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62ccc0, cid 0, qid 0 00:25:47.209 [2024-12-14 18:46:02.781295] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.209 [2024-12-14 18:46:02.781302] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.209 [2024-12-14 18:46:02.781306] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.781310] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62ccc0) on tqpair=0x5e6a80 00:25:47.209 [2024-12-14 18:46:02.781316] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:47.209 [2024-12-14 18:46:02.781321] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:47.209 [2024-12-14 18:46:02.781329] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:47.209 [2024-12-14 18:46:02.781435] 
nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:47.209 [2024-12-14 18:46:02.781440] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:47.209 [2024-12-14 18:46:02.781449] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.781453] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.781457] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5e6a80) 00:25:47.209 [2024-12-14 18:46:02.781465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.209 [2024-12-14 18:46:02.781485] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62ccc0, cid 0, qid 0 00:25:47.209 [2024-12-14 18:46:02.781560] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.209 [2024-12-14 18:46:02.781567] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.209 [2024-12-14 18:46:02.781571] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.781575] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62ccc0) on tqpair=0x5e6a80 00:25:47.209 [2024-12-14 18:46:02.781581] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:47.209 [2024-12-14 18:46:02.781591] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.781596] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.781600] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5e6a80) 00:25:47.209 [2024-12-14 18:46:02.781608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.209 [2024-12-14 18:46:02.781642] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62ccc0, cid 0, qid 0 00:25:47.209 [2024-12-14 18:46:02.781717] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.209 [2024-12-14 18:46:02.781724] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.209 [2024-12-14 18:46:02.781728] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.209 [2024-12-14 18:46:02.781732] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62ccc0) on tqpair=0x5e6a80 00:25:47.209 [2024-12-14 18:46:02.781737] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:47.209 [2024-12-14 18:46:02.781743] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:47.209 [2024-12-14 18:46:02.781751] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:47.209 [2024-12-14 18:46:02.781767] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:47.209 [2024-12-14 18:46:02.781777] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.209 
[2024-12-14 18:46:02.781782] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5e6a80) 00:25:47.209 [2024-12-14 18:46:02.781790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.209 [2024-12-14 18:46:02.781811] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62ccc0, cid 0, qid 0 00:25:47.210 [2024-12-14 18:46:02.781937] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.210 [2024-12-14 18:46:02.781949] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.210 [2024-12-14 18:46:02.781954] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.781958] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5e6a80): datao=0, datal=4096, cccid=0 00:25:47.210 [2024-12-14 18:46:02.781965] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62ccc0) on tqpair(0x5e6a80): expected_datao=0, payload_size=4096 00:25:47.210 [2024-12-14 18:46:02.781970] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.781978] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.781982] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.781992] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.210 [2024-12-14 18:46:02.781999] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.210 [2024-12-14 18:46:02.782002] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782007] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62ccc0) on tqpair=0x5e6a80 00:25:47.210 [2024-12-14 18:46:02.782016] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:47.210 [2024-12-14 18:46:02.782021] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:47.210 [2024-12-14 18:46:02.782026] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:47.210 [2024-12-14 18:46:02.782031] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:47.210 [2024-12-14 18:46:02.782035] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:47.210 [2024-12-14 18:46:02.782041] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:47.210 [2024-12-14 18:46:02.782050] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:47.210 [2024-12-14 18:46:02.782058] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782063] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782067] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5e6a80) 00:25:47.210 [2024-12-14 18:46:02.782075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:47.210 [2024-12-14 
18:46:02.782096] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62ccc0, cid 0, qid 0 00:25:47.210 [2024-12-14 18:46:02.782167] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.210 [2024-12-14 18:46:02.782174] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.210 [2024-12-14 18:46:02.782178] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782182] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62ccc0) on tqpair=0x5e6a80 00:25:47.210 [2024-12-14 18:46:02.782190] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782195] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782199] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5e6a80) 00:25:47.210 [2024-12-14 18:46:02.782206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.210 [2024-12-14 18:46:02.782212] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782217] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782220] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x5e6a80) 00:25:47.210 [2024-12-14 18:46:02.782227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.210 [2024-12-14 18:46:02.782233] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782237] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782241] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x5e6a80) 00:25:47.210 [2024-12-14 18:46:02.782247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.210 [2024-12-14 18:46:02.782253] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782258] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782262] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.210 [2024-12-14 18:46:02.782268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.210 [2024-12-14 18:46:02.782273] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:47.210 [2024-12-14 18:46:02.782288] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:47.210 [2024-12-14 18:46:02.782296] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782300] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5e6a80) 00:25:47.210 [2024-12-14 18:46:02.782307] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.210 [2024-12-14 18:46:02.782329] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62ccc0, cid 0, qid 0 00:25:47.210 [2024-12-14 18:46:02.782337] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62ce40, cid 1, qid 0 00:25:47.210 [2024-12-14 18:46:02.782342] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62cfc0, cid 2, qid 0 00:25:47.210 [2024-12-14 18:46:02.782347] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.210 [2024-12-14 18:46:02.782351] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d2c0, cid 4, qid 0 00:25:47.210 [2024-12-14 18:46:02.782456] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.210 [2024-12-14 18:46:02.782474] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.210 [2024-12-14 18:46:02.782480] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782484] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d2c0) on tqpair=0x5e6a80 00:25:47.210 [2024-12-14 18:46:02.782490] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:47.210 [2024-12-14 18:46:02.782496] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:47.210 [2024-12-14 18:46:02.782505] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:47.210 [2024-12-14 18:46:02.782516] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:47.210 [2024-12-14 18:46:02.782524] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782529] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782533] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5e6a80) 00:25:47.210 [2024-12-14 18:46:02.782541] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:47.210 [2024-12-14 18:46:02.782562] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d2c0, cid 4, qid 0 00:25:47.210 [2024-12-14 18:46:02.782641] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.210 [2024-12-14 18:46:02.782650] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.210 [2024-12-14 18:46:02.782654] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782659] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d2c0) on tqpair=0x5e6a80 00:25:47.210 [2024-12-14 18:46:02.782725] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:47.210 [2024-12-14 18:46:02.782737] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:47.210 [2024-12-14 18:46:02.782745] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782750] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5e6a80) 00:25:47.210 [2024-12-14 
18:46:02.782758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.210 [2024-12-14 18:46:02.782779] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d2c0, cid 4, qid 0 00:25:47.210 [2024-12-14 18:46:02.782889] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.210 [2024-12-14 18:46:02.782897] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.210 [2024-12-14 18:46:02.782901] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782905] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5e6a80): datao=0, datal=4096, cccid=4 00:25:47.210 [2024-12-14 18:46:02.782910] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62d2c0) on tqpair(0x5e6a80): expected_datao=0, payload_size=4096 00:25:47.210 [2024-12-14 18:46:02.782915] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782922] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782926] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782935] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.210 [2024-12-14 18:46:02.782942] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.210 [2024-12-14 18:46:02.782945] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782950] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d2c0) on tqpair=0x5e6a80 00:25:47.210 [2024-12-14 18:46:02.782960] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:47.210 [2024-12-14 18:46:02.782974] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:47.210 [2024-12-14 18:46:02.782985] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:47.210 [2024-12-14 18:46:02.782993] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.782998] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5e6a80) 00:25:47.210 [2024-12-14 18:46:02.783005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.210 [2024-12-14 18:46:02.783028] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d2c0, cid 4, qid 0 00:25:47.210 [2024-12-14 18:46:02.783127] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.210 [2024-12-14 18:46:02.783134] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.210 [2024-12-14 18:46:02.783138] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.210 [2024-12-14 18:46:02.783142] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5e6a80): datao=0, datal=4096, cccid=4 00:25:47.210 [2024-12-14 18:46:02.783147] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62d2c0) on tqpair(0x5e6a80): expected_datao=0, payload_size=4096 00:25:47.210 [2024-12-14 18:46:02.783152] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:25:47.210 [2024-12-14 18:46:02.783159] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.783163] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.783171] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.211 [2024-12-14 18:46:02.783177] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.211 [2024-12-14 18:46:02.783181] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.783185] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d2c0) on tqpair=0x5e6a80 00:25:47.211 [2024-12-14 18:46:02.783213] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:47.211 [2024-12-14 18:46:02.783224] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:47.211 [2024-12-14 18:46:02.783233] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.783237] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5e6a80) 00:25:47.211 [2024-12-14 18:46:02.783244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.211 [2024-12-14 18:46:02.783265] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d2c0, cid 4, qid 0 00:25:47.211 [2024-12-14 18:46:02.783355] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.211 [2024-12-14 18:46:02.783362] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.211 [2024-12-14 18:46:02.783366] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.783369] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5e6a80): datao=0, datal=4096, cccid=4 00:25:47.211 [2024-12-14 18:46:02.783374] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62d2c0) on tqpair(0x5e6a80): expected_datao=0, payload_size=4096 00:25:47.211 [2024-12-14 18:46:02.783378] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.783385] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.783389] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.783397] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.211 [2024-12-14 18:46:02.783403] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.211 [2024-12-14 18:46:02.783407] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.783411] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d2c0) on tqpair=0x5e6a80 00:25:47.211 [2024-12-14 18:46:02.783420] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:47.211 [2024-12-14 18:46:02.783429] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:47.211 [2024-12-14 18:46:02.783440] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to set supported features (timeout 30000 ms) 00:25:47.211 [2024-12-14 18:46:02.783447] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:47.211 [2024-12-14 18:46:02.783452] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:47.211 [2024-12-14 18:46:02.783457] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:47.211 [2024-12-14 18:46:02.783463] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:47.211 [2024-12-14 18:46:02.783467] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:47.211 [2024-12-14 18:46:02.783472] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:47.211 [2024-12-14 18:46:02.783489] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.783494] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5e6a80) 00:25:47.211 [2024-12-14 18:46:02.783501] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.211 [2024-12-14 18:46:02.783508] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.783512] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.783516] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5e6a80) 00:25:47.211 [2024-12-14 18:46:02.783522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.211 [2024-12-14 18:46:02.783544] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d2c0, cid 4, qid 0 00:25:47.211 [2024-12-14 18:46:02.783551] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d440, cid 5, qid 0 00:25:47.211 [2024-12-14 18:46:02.783672] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.211 [2024-12-14 18:46:02.783681] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.211 [2024-12-14 18:46:02.783685] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.783690] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d2c0) on tqpair=0x5e6a80 00:25:47.211 [2024-12-14 18:46:02.783697] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.211 [2024-12-14 18:46:02.783703] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.211 [2024-12-14 18:46:02.783707] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.783711] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d440) on tqpair=0x5e6a80 00:25:47.211 [2024-12-14 18:46:02.783722] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.783727] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5e6a80) 00:25:47.211 [2024-12-14 18:46:02.783734] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.211 [2024-12-14 18:46:02.783756] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d440, cid 5, qid 0 00:25:47.211 [2024-12-14 18:46:02.783821] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.211 [2024-12-14 18:46:02.783828] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.211 [2024-12-14 18:46:02.783832] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.783836] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d440) on tqpair=0x5e6a80 00:25:47.211 [2024-12-14 18:46:02.783847] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.783851] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5e6a80) 00:25:47.211 [2024-12-14 18:46:02.783859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.211 [2024-12-14 18:46:02.783877] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d440, cid 5, qid 0 00:25:47.211 [2024-12-14 18:46:02.783955] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.211 [2024-12-14 18:46:02.783962] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.211 [2024-12-14 18:46:02.783966] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.783970] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d440) on tqpair=0x5e6a80 00:25:47.211 [2024-12-14 18:46:02.783981] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.783986] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5e6a80) 00:25:47.211 [2024-12-14 18:46:02.783993] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.211 [2024-12-14 18:46:02.784011] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d440, cid 5, qid 0 00:25:47.211 [2024-12-14 18:46:02.784093] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.211 [2024-12-14 18:46:02.784100] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.211 [2024-12-14 18:46:02.784103] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.784107] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d440) on tqpair=0x5e6a80 00:25:47.211 [2024-12-14 18:46:02.784126] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.784132] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5e6a80) 00:25:47.211 [2024-12-14 18:46:02.784139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.211 [2024-12-14 18:46:02.784147] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.784151] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5e6a80) 00:25:47.211 [2024-12-14 18:46:02.784157] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.211 [2024-12-14 18:46:02.784164] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.784168] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x5e6a80) 00:25:47.211 [2024-12-14 18:46:02.784174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.211 [2024-12-14 18:46:02.784185] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.784190] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5e6a80) 00:25:47.211 [2024-12-14 18:46:02.784196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.211 [2024-12-14 18:46:02.784217] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d440, cid 5, qid 0 00:25:47.211 [2024-12-14 18:46:02.784224] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d2c0, cid 4, qid 0 00:25:47.211 [2024-12-14 18:46:02.784229] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d5c0, cid 6, qid 0 00:25:47.211 [2024-12-14 18:46:02.784234] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d740, cid 7, qid 0 00:25:47.211 [2024-12-14 18:46:02.784403] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.211 [2024-12-14 18:46:02.784410] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.211 [2024-12-14 18:46:02.784414] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.784418] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5e6a80): datao=0, datal=8192, cccid=5 00:25:47.211 [2024-12-14 18:46:02.784423] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62d440) on tqpair(0x5e6a80): expected_datao=0, payload_size=8192 00:25:47.211 [2024-12-14 18:46:02.784427] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.784444] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.784448] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.211 [2024-12-14 18:46:02.784454] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.211 [2024-12-14 18:46:02.784460] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.211 [2024-12-14 18:46:02.784464] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.212 [2024-12-14 18:46:02.784468] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5e6a80): datao=0, datal=512, cccid=4 00:25:47.212 [2024-12-14 18:46:02.784472] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62d2c0) on tqpair(0x5e6a80): expected_datao=0, payload_size=512 00:25:47.212 [2024-12-14 18:46:02.784477] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.212 [2024-12-14 18:46:02.784483] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.212 [2024-12-14 18:46:02.784486] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.212 
[2024-12-14 18:46:02.784492] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.212 [2024-12-14 18:46:02.784498] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.212 [2024-12-14 18:46:02.784501] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.212 [2024-12-14 18:46:02.784505] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5e6a80): datao=0, datal=512, cccid=6 00:25:47.212 [2024-12-14 18:46:02.784510] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62d5c0) on tqpair(0x5e6a80): expected_datao=0, payload_size=512 00:25:47.212 [2024-12-14 18:46:02.784514] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.212 [2024-12-14 18:46:02.784520] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.212 [2024-12-14 18:46:02.784524] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.212 [2024-12-14 18:46:02.784529] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.212 [2024-12-14 18:46:02.784535] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.212 [2024-12-14 18:46:02.784539] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.212 [2024-12-14 18:46:02.784542] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5e6a80): datao=0, datal=4096, cccid=7 00:25:47.212 [2024-12-14 18:46:02.784547] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62d740) on tqpair(0x5e6a80): expected_datao=0, payload_size=4096 00:25:47.212 [2024-12-14 18:46:02.784551] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.212 [2024-12-14 18:46:02.784558] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.212 [2024-12-14 18:46:02.784562] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.212 [2024-12-14 18:46:02.784570] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.212 [2024-12-14 18:46:02.784575] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.212 [2024-12-14 18:46:02.784580] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.212 [2024-12-14 18:46:02.784584] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d440) on tqpair=0x5e6a80 00:25:47.212 [2024-12-14 18:46:02.784602] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.212 [2024-12-14 18:46:02.784609] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.212 [2024-12-14 18:46:02.787664] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.212 [2024-12-14 18:46:02.787673] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d2c0) on tqpair=0x5e6a80 00:25:47.212 [2024-12-14 18:46:02.787691] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.212 [2024-12-14 18:46:02.787699] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.212 [2024-12-14 18:46:02.787703] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.212 [2024-12-14 18:46:02.787707] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d5c0) on tqpair=0x5e6a80 00:25:47.212 [2024-12-14 18:46:02.787714] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.212 [2024-12-14 18:46:02.787721] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.212 [2024-12-14 18:46:02.787724] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.212 ===================================================== 00:25:47.212 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:25:47.212 ===================================================== 00:25:47.212 Controller Capabilities/Features 00:25:47.212 ================================ 00:25:47.212 Vendor ID: 8086 00:25:47.212 Subsystem Vendor ID: 8086 00:25:47.212 Serial Number: SPDK00000000000001 00:25:47.212 Model Number: SPDK bdev Controller 00:25:47.212 Firmware Version: 24.09.1 00:25:47.212 Recommended Arb Burst: 6 00:25:47.212 IEEE OUI Identifier: e4 d2 5c 00:25:47.212 Multi-path I/O 00:25:47.212 May have multiple subsystem ports: Yes 00:25:47.212 May have multiple controllers: Yes 00:25:47.212 Associated with SR-IOV VF: No 00:25:47.212 Max Data Transfer Size: 131072 00:25:47.212 Max Number of Namespaces: 32 00:25:47.212 Max Number of I/O Queues: 127 00:25:47.212 NVMe Specification Version (VS): 1.3 00:25:47.212 NVMe Specification Version (Identify): 1.3 00:25:47.212 Maximum Queue Entries: 128 00:25:47.212 Contiguous Queues Required: Yes 00:25:47.212 Arbitration Mechanisms Supported 00:25:47.212 Weighted Round Robin: Not Supported 00:25:47.212 Vendor Specific: Not Supported 00:25:47.212 Reset Timeout: 15000 ms 00:25:47.212 Doorbell Stride: 4 bytes 00:25:47.212 NVM Subsystem Reset: Not Supported 00:25:47.212 Command Sets Supported 00:25:47.212 NVM Command Set: Supported 00:25:47.212 Boot Partition: Not Supported 00:25:47.212 Memory Page Size Minimum: 4096 bytes 00:25:47.212 Memory Page Size Maximum: 4096 bytes 00:25:47.212 Persistent Memory Region: Not Supported 00:25:47.212 Optional Asynchronous Events Supported 00:25:47.212 Namespace Attribute Notices: Supported 00:25:47.212 Firmware Activation Notices: Not Supported 00:25:47.212 ANA Change Notices: Not Supported 00:25:47.212 PLE Aggregate Log Change Notices: Not Supported 00:25:47.212 LBA Status Info Alert Notices: Not Supported 00:25:47.212 EGE Aggregate Log Change Notices: Not Supported 00:25:47.212 Normal NVM Subsystem Shutdown event: Not Supported 00:25:47.212 Zone Descriptor Change Notices: Not Supported 00:25:47.212 Discovery Log Change Notices: Not Supported 00:25:47.212 Controller Attributes 00:25:47.212 128-bit Host Identifier: Supported 00:25:47.212 Non-Operational Permissive Mode: Not Supported 00:25:47.212 NVM Sets: Not Supported 00:25:47.212 Read Recovery Levels: Not Supported 00:25:47.212 Endurance Groups: Not Supported 00:25:47.212 Predictable Latency Mode: Not Supported 00:25:47.212 Traffic Based Keep ALive: Not Supported 00:25:47.212 Namespace Granularity: Not Supported 00:25:47.212 SQ Associations: Not Supported 00:25:47.212 UUID List: Not Supported 00:25:47.212 Multi-Domain Subsystem: Not Supported 00:25:47.212 Fixed Capacity Management: Not Supported 00:25:47.212 Variable Capacity Management: Not Supported 00:25:47.212 Delete Endurance Group: Not Supported 00:25:47.212 Delete NVM Set: Not Supported 00:25:47.212 Extended LBA Formats Supported: Not Supported 00:25:47.212 Flexible Data Placement Supported: Not Supported 00:25:47.212 00:25:47.212 Controller Memory Buffer Support 00:25:47.212 ================================ 00:25:47.212 Supported: No 00:25:47.212 00:25:47.212 Persistent Memory Region Support 00:25:47.212 ================================ 00:25:47.212 Supported: No 00:25:47.212 00:25:47.212 Admin Command Set Attributes 00:25:47.212 ============================ 00:25:47.212 Security Send/Receive: Not Supported 
00:25:47.212 Format NVM: Not Supported 00:25:47.212 Firmware Activate/Download: Not Supported 00:25:47.212 Namespace Management: Not Supported 00:25:47.212 Device Self-Test: Not Supported 00:25:47.212 Directives: Not Supported 00:25:47.212 NVMe-MI: Not Supported 00:25:47.212 Virtualization Management: Not Supported 00:25:47.212 Doorbell Buffer Config: Not Supported 00:25:47.212 Get LBA Status Capability: Not Supported 00:25:47.212 Command & Feature Lockdown Capability: Not Supported 00:25:47.212 Abort Command Limit: 4 00:25:47.212 Async Event Request Limit: 4 00:25:47.212 Number of Firmware Slots: N/A 00:25:47.212 Firmware Slot 1 Read-Only: N/A 00:25:47.212 Firmware Activation Without Reset: N/A 00:25:47.212 Multiple Update Detection Support: N/A 00:25:47.212 Firmware Update Granularity: No Information Provided 00:25:47.212 Per-Namespace SMART Log: No 00:25:47.212 Asymmetric Namespace Access Log Page: Not Supported 00:25:47.212 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:47.212 Command Effects Log Page: Supported 00:25:47.212 Get Log Page Extended Data: Supported 00:25:47.212 Telemetry Log Pages: Not Supported 00:25:47.212 Persistent Event Log Pages: Not Supported 00:25:47.212 Supported Log Pages Log Page: May Support 00:25:47.212 Commands Supported & Effects Log Page: Not Supported 00:25:47.212 Feature Identifiers & Effects Log Page:May Support 00:25:47.212 NVMe-MI Commands & Effects Log Page: May Support 00:25:47.212 Data Area 4 for Telemetry Log: Not Supported 00:25:47.212 Error Log Page Entries Supported: 128 00:25:47.212 Keep Alive: Supported 00:25:47.212 Keep Alive Granularity: 10000 ms 00:25:47.212 00:25:47.212 NVM Command Set Attributes 00:25:47.212 ========================== 00:25:47.212 Submission Queue Entry Size 00:25:47.212 Max: 64 00:25:47.212 Min: 64 00:25:47.212 Completion Queue Entry Size 00:25:47.212 Max: 16 00:25:47.212 Min: 16 00:25:47.212 Number of Namespaces: 32 00:25:47.212 Compare Command: Supported 00:25:47.212 Write Uncorrectable Command: Not Supported 00:25:47.212 Dataset Management Command: Supported 00:25:47.212 Write Zeroes Command: Supported 00:25:47.212 Set Features Save Field: Not Supported 00:25:47.213 Reservations: Supported 00:25:47.213 Timestamp: Not Supported 00:25:47.213 Copy: Supported 00:25:47.213 Volatile Write Cache: Present 00:25:47.213 Atomic Write Unit (Normal): 1 00:25:47.213 Atomic Write Unit (PFail): 1 00:25:47.213 Atomic Compare & Write Unit: 1 00:25:47.213 Fused Compare & Write: Supported 00:25:47.213 Scatter-Gather List 00:25:47.213 SGL Command Set: Supported 00:25:47.213 SGL Keyed: Supported 00:25:47.213 SGL Bit Bucket Descriptor: Not Supported 00:25:47.213 SGL Metadata Pointer: Not Supported 00:25:47.213 Oversized SGL: Not Supported 00:25:47.213 SGL Metadata Address: Not Supported 00:25:47.213 SGL Offset: Supported 00:25:47.213 Transport SGL Data Block: Not Supported 00:25:47.213 Replay Protected Memory Block: Not Supported 00:25:47.213 00:25:47.213 Firmware Slot Information 00:25:47.213 ========================= 00:25:47.213 Active slot: 1 00:25:47.213 Slot 1 Firmware Revision: 24.09.1 00:25:47.213 00:25:47.213 00:25:47.213 Commands Supported and Effects 00:25:47.213 ============================== 00:25:47.213 Admin Commands 00:25:47.213 -------------- 00:25:47.213 Get Log Page (02h): Supported 00:25:47.213 Identify (06h): Supported 00:25:47.213 Abort (08h): Supported 00:25:47.213 Set Features (09h): Supported 00:25:47.213 Get Features (0Ah): Supported 00:25:47.213 Asynchronous Event Request (0Ch): Supported 00:25:47.213 Keep Alive 
(18h): Supported 00:25:47.213 I/O Commands 00:25:47.213 ------------ 00:25:47.213 Flush (00h): Supported LBA-Change 00:25:47.213 Write (01h): Supported LBA-Change 00:25:47.213 Read (02h): Supported 00:25:47.213 Compare (05h): Supported 00:25:47.213 Write Zeroes (08h): Supported LBA-Change 00:25:47.213 Dataset Management (09h): Supported LBA-Change 00:25:47.213 Copy (19h): Supported LBA-Change 00:25:47.213 00:25:47.213 Error Log 00:25:47.213 ========= 00:25:47.213 00:25:47.213 Arbitration 00:25:47.213 =========== 00:25:47.213 Arbitration Burst: 1 00:25:47.213 00:25:47.213 Power Management 00:25:47.213 ================ 00:25:47.213 Number of Power States: 1 00:25:47.213 Current Power State: Power State #0 00:25:47.213 Power State #0: 00:25:47.213 Max Power: 0.00 W 00:25:47.213 Non-Operational State: Operational 00:25:47.213 Entry Latency: Not Reported 00:25:47.213 Exit Latency: Not Reported 00:25:47.213 Relative Read Throughput: 0 00:25:47.213 Relative Read Latency: 0 00:25:47.213 Relative Write Throughput: 0 00:25:47.213 Relative Write Latency: 0 00:25:47.213 Idle Power: Not Reported 00:25:47.213 Active Power: Not Reported 00:25:47.213 Non-Operational Permissive Mode: Not Supported 00:25:47.213 00:25:47.213 Health Information 00:25:47.213 ================== 00:25:47.213 Critical Warnings: 00:25:47.213 Available Spare Space: OK 00:25:47.213 Temperature: OK 00:25:47.213 Device Reliability: OK 00:25:47.213 Read Only: No 00:25:47.213 Volatile Memory Backup: OK 00:25:47.213 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:47.213 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:47.213 Available Spare: 0% 00:25:47.213 Available Spare Threshold: 0% 00:25:47.213 Life Percentage U[2024-12-14 18:46:02.787728] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d740) on tqpair=0x5e6a80 00:25:47.213 [2024-12-14 18:46:02.787847] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.213 [2024-12-14 18:46:02.787855] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5e6a80) 00:25:47.213 [2024-12-14 18:46:02.787864] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.213 [2024-12-14 18:46:02.787894] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d740, cid 7, qid 0 00:25:47.213 [2024-12-14 18:46:02.787983] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.213 [2024-12-14 18:46:02.787990] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.213 [2024-12-14 18:46:02.787994] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.213 [2024-12-14 18:46:02.787999] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d740) on tqpair=0x5e6a80 00:25:47.213 [2024-12-14 18:46:02.788076] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:47.213 [2024-12-14 18:46:02.788093] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62ccc0) on tqpair=0x5e6a80 00:25:47.213 [2024-12-14 18:46:02.788101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.213 [2024-12-14 18:46:02.788107] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62ce40) on tqpair=0x5e6a80 00:25:47.213 [2024-12-14 18:46:02.788112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.213 [2024-12-14 18:46:02.788117] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62cfc0) on tqpair=0x5e6a80 00:25:47.213 [2024-12-14 18:46:02.788122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.213 [2024-12-14 18:46:02.788127] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.213 [2024-12-14 18:46:02.788131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.213 [2024-12-14 18:46:02.788141] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.213 [2024-12-14 18:46:02.788145] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.213 [2024-12-14 18:46:02.788149] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.213 [2024-12-14 18:46:02.788157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.213 [2024-12-14 18:46:02.788185] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.213 [2024-12-14 18:46:02.788266] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.213 [2024-12-14 18:46:02.788273] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.213 [2024-12-14 18:46:02.788277] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.213 [2024-12-14 18:46:02.788281] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.213 [2024-12-14 18:46:02.788289] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.213 [2024-12-14 18:46:02.788294] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.213 [2024-12-14 18:46:02.788297] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.213 [2024-12-14 18:46:02.788305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.213 [2024-12-14 18:46:02.788327] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.213 [2024-12-14 18:46:02.788417] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.213 [2024-12-14 18:46:02.788424] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.213 [2024-12-14 18:46:02.788427] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.213 [2024-12-14 18:46:02.788431] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.213 [2024-12-14 18:46:02.788436] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:47.213 [2024-12-14 18:46:02.788441] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:47.213 [2024-12-14 18:46:02.788451] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.213 [2024-12-14 18:46:02.788456] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.213 [2024-12-14 18:46:02.788460] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 
00:25:47.213 [2024-12-14 18:46:02.788467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.213 [2024-12-14 18:46:02.788486] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.213 [2024-12-14 18:46:02.788557] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.213 [2024-12-14 18:46:02.788564] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.213 [2024-12-14 18:46:02.788567] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.213 [2024-12-14 18:46:02.788572] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.213 [2024-12-14 18:46:02.788583] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.213 [2024-12-14 18:46:02.788587] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.788591] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.214 [2024-12-14 18:46:02.788621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.214 [2024-12-14 18:46:02.788662] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.214 [2024-12-14 18:46:02.788731] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.214 [2024-12-14 18:46:02.788738] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.214 [2024-12-14 18:46:02.788742] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.788746] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.214 [2024-12-14 18:46:02.788757] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.788762] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.788766] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.214 [2024-12-14 18:46:02.788774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.214 [2024-12-14 18:46:02.788793] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.214 [2024-12-14 18:46:02.788856] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.214 [2024-12-14 18:46:02.788865] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.214 [2024-12-14 18:46:02.788868] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.788873] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.214 [2024-12-14 18:46:02.788884] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.788889] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.788893] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.214 [2024-12-14 18:46:02.788900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.214 [2024-12-14 18:46:02.788920] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.214 [2024-12-14 18:46:02.788992] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.214 [2024-12-14 18:46:02.788999] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.214 [2024-12-14 18:46:02.789017] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789022] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.214 [2024-12-14 18:46:02.789032] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789037] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789041] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.214 [2024-12-14 18:46:02.789048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.214 [2024-12-14 18:46:02.789066] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.214 [2024-12-14 18:46:02.789132] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.214 [2024-12-14 18:46:02.789138] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.214 [2024-12-14 18:46:02.789142] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789146] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.214 [2024-12-14 18:46:02.789157] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789161] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789165] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.214 [2024-12-14 18:46:02.789173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.214 [2024-12-14 18:46:02.789190] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.214 [2024-12-14 18:46:02.789250] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.214 [2024-12-14 18:46:02.789256] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.214 [2024-12-14 18:46:02.789260] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789264] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.214 [2024-12-14 18:46:02.789275] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789279] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789283] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.214 [2024-12-14 18:46:02.789290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.214 [2024-12-14 18:46:02.789308] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.214 [2024-12-14 18:46:02.789370] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.214 [2024-12-14 
18:46:02.789377] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.214 [2024-12-14 18:46:02.789381] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789385] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.214 [2024-12-14 18:46:02.789395] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789400] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789403] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.214 [2024-12-14 18:46:02.789411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.214 [2024-12-14 18:46:02.789429] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.214 [2024-12-14 18:46:02.789502] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.214 [2024-12-14 18:46:02.789514] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.214 [2024-12-14 18:46:02.789518] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789522] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.214 [2024-12-14 18:46:02.789533] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789538] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789542] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.214 [2024-12-14 18:46:02.789550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.214 [2024-12-14 18:46:02.789569] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.214 [2024-12-14 18:46:02.789642] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.214 [2024-12-14 18:46:02.789651] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.214 [2024-12-14 18:46:02.789654] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789659] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.214 [2024-12-14 18:46:02.789670] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789675] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789679] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.214 [2024-12-14 18:46:02.789686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.214 [2024-12-14 18:46:02.789707] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.214 [2024-12-14 18:46:02.789773] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.214 [2024-12-14 18:46:02.789780] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.214 [2024-12-14 18:46:02.789784] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.214 [2024-12-14 
18:46:02.789788] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.214 [2024-12-14 18:46:02.789804] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789809] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789813] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.214 [2024-12-14 18:46:02.789820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.214 [2024-12-14 18:46:02.789839] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.214 [2024-12-14 18:46:02.789900] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.214 [2024-12-14 18:46:02.789907] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.214 [2024-12-14 18:46:02.789910] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789915] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.214 [2024-12-14 18:46:02.789925] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789930] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.789934] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.214 [2024-12-14 18:46:02.789941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.214 [2024-12-14 18:46:02.789960] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.214 [2024-12-14 18:46:02.790031] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.214 [2024-12-14 18:46:02.790038] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.214 [2024-12-14 18:46:02.790042] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.790046] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.214 [2024-12-14 18:46:02.790056] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.790061] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.214 [2024-12-14 18:46:02.790064] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.214 [2024-12-14 18:46:02.790072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.214 [2024-12-14 18:46:02.790090] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.214 [2024-12-14 18:46:02.790157] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.214 [2024-12-14 18:46:02.790163] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.214 [2024-12-14 18:46:02.790167] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.790171] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.215 [2024-12-14 18:46:02.790182] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:25:47.215 [2024-12-14 18:46:02.790187] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.790191] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.215 [2024-12-14 18:46:02.790198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.215 [2024-12-14 18:46:02.790216] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.215 [2024-12-14 18:46:02.790282] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.215 [2024-12-14 18:46:02.790289] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.215 [2024-12-14 18:46:02.790292] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.790297] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.215 [2024-12-14 18:46:02.790307] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.790311] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.790315] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.215 [2024-12-14 18:46:02.790322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.215 [2024-12-14 18:46:02.790340] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.215 [2024-12-14 18:46:02.790405] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.215 [2024-12-14 18:46:02.790412] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.215 [2024-12-14 18:46:02.790415] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.790419] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.215 [2024-12-14 18:46:02.790429] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.790434] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.790438] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.215 [2024-12-14 18:46:02.790445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.215 [2024-12-14 18:46:02.790463] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.215 [2024-12-14 18:46:02.790530] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.215 [2024-12-14 18:46:02.790538] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.215 [2024-12-14 18:46:02.790542] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.790546] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.215 [2024-12-14 18:46:02.790556] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.790561] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.790565] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x5e6a80) 00:25:47.215 [2024-12-14 18:46:02.790572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.215 [2024-12-14 18:46:02.790591] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.215 [2024-12-14 18:46:02.790692] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.215 [2024-12-14 18:46:02.790701] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.215 [2024-12-14 18:46:02.790704] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.790709] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.215 [2024-12-14 18:46:02.790720] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.790725] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.790730] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.215 [2024-12-14 18:46:02.790737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.215 [2024-12-14 18:46:02.790757] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.215 [2024-12-14 18:46:02.790844] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.215 [2024-12-14 18:46:02.790860] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.215 [2024-12-14 18:46:02.790865] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.790869] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.215 [2024-12-14 18:46:02.790893] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.790899] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.790903] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.215 [2024-12-14 18:46:02.790911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.215 [2024-12-14 18:46:02.790932] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.215 [2024-12-14 18:46:02.791004] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.215 [2024-12-14 18:46:02.791011] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.215 [2024-12-14 18:46:02.791015] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.791019] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.215 [2024-12-14 18:46:02.791030] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.791034] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.791038] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.215 [2024-12-14 18:46:02.791046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.215 
[2024-12-14 18:46:02.791065] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.215 [2024-12-14 18:46:02.791134] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.215 [2024-12-14 18:46:02.791141] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.215 [2024-12-14 18:46:02.791145] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.791149] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.215 [2024-12-14 18:46:02.791160] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.791164] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.791168] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.215 [2024-12-14 18:46:02.791176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.215 [2024-12-14 18:46:02.791194] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.215 [2024-12-14 18:46:02.791263] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.215 [2024-12-14 18:46:02.791278] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.215 [2024-12-14 18:46:02.791282] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.791287] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.215 [2024-12-14 18:46:02.791298] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.791303] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.791307] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.215 [2024-12-14 18:46:02.791315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.215 [2024-12-14 18:46:02.791334] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.215 [2024-12-14 18:46:02.791400] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.215 [2024-12-14 18:46:02.791421] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.215 [2024-12-14 18:46:02.791426] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.791430] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.215 [2024-12-14 18:46:02.791442] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.791447] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.791451] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.215 [2024-12-14 18:46:02.791458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.215 [2024-12-14 18:46:02.791478] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.215 [2024-12-14 18:46:02.791542] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:25:47.215 [2024-12-14 18:46:02.791569] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.215 [2024-12-14 18:46:02.791573] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.791577] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.215 [2024-12-14 18:46:02.791588] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.791593] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.795653] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5e6a80) 00:25:47.215 [2024-12-14 18:46:02.795675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.215 [2024-12-14 18:46:02.795702] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62d140, cid 3, qid 0 00:25:47.215 [2024-12-14 18:46:02.795776] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.215 [2024-12-14 18:46:02.795784] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.215 [2024-12-14 18:46:02.795787] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.215 [2024-12-14 18:46:02.795792] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62d140) on tqpair=0x5e6a80 00:25:47.215 [2024-12-14 18:46:02.795801] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:25:47.215 sed: 0% 00:25:47.215 Data Units Read: 0 00:25:47.215 Data Units Written: 0 00:25:47.215 Host Read Commands: 0 00:25:47.215 Host Write Commands: 0 00:25:47.215 Controller Busy Time: 0 minutes 00:25:47.215 Power Cycles: 0 00:25:47.215 Power On Hours: 0 hours 00:25:47.215 Unsafe Shutdowns: 0 00:25:47.215 Unrecoverable Media Errors: 0 00:25:47.216 Lifetime Error Log Entries: 0 00:25:47.216 Warning Temperature Time: 0 minutes 00:25:47.216 Critical Temperature Time: 0 minutes 00:25:47.216 00:25:47.216 Number of Queues 00:25:47.216 ================ 00:25:47.216 Number of I/O Submission Queues: 127 00:25:47.216 Number of I/O Completion Queues: 127 00:25:47.216 00:25:47.216 Active Namespaces 00:25:47.216 ================= 00:25:47.216 Namespace ID:1 00:25:47.216 Error Recovery Timeout: Unlimited 00:25:47.216 Command Set Identifier: NVM (00h) 00:25:47.216 Deallocate: Supported 00:25:47.216 Deallocated/Unwritten Error: Not Supported 00:25:47.216 Deallocated Read Value: Unknown 00:25:47.216 Deallocate in Write Zeroes: Not Supported 00:25:47.216 Deallocated Guard Field: 0xFFFF 00:25:47.216 Flush: Supported 00:25:47.216 Reservation: Supported 00:25:47.216 Namespace Sharing Capabilities: Multiple Controllers 00:25:47.216 Size (in LBAs): 131072 (0GiB) 00:25:47.216 Capacity (in LBAs): 131072 (0GiB) 00:25:47.216 Utilization (in LBAs): 131072 (0GiB) 00:25:47.216 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:47.216 EUI64: ABCDEF0123456789 00:25:47.216 UUID: abacb9aa-884a-4706-8bc5-d1a12dffb560 00:25:47.216 Thin Provisioning: Not Supported 00:25:47.216 Per-NS Atomic Units: Yes 00:25:47.216 Atomic Boundary Size (Normal): 0 00:25:47.216 Atomic Boundary Size (PFail): 0 00:25:47.216 Atomic Boundary Offset: 0 00:25:47.216 Maximum Single Source Range Length: 65535 00:25:47.216 Maximum Copy Length: 65535 00:25:47.216 Maximum Source Range Count: 1 00:25:47.216 NGUID/EUI64 Never Reused: No 00:25:47.216 
Namespace Write Protected: No 00:25:47.216 Number of LBA Formats: 1 00:25:47.216 Current LBA Format: LBA Format #00 00:25:47.216 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:47.216 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:47.216 rmmod nvme_tcp 00:25:47.216 rmmod nvme_fabrics 00:25:47.216 rmmod nvme_keyring 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 106780 ']' 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 106780 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 106780 ']' 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 106780 00:25:47.216 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:25:47.475 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:47.475 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 106780 00:25:47.475 killing process with pid 106780 00:25:47.475 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:47.475 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:47.475 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 106780' 00:25:47.475 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 106780 00:25:47.475 18:46:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 106780 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 
-- # nvmf_tcp_fini 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.734 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:25:47.993 00:25:47.993 real 0m2.494s 00:25:47.993 user 0m5.203s 00:25:47.993 sys 0m0.826s 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:47.993 ************************************ 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:47.993 END TEST nvmf_identify 00:25:47.993 ************************************ 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:47.993 ************************************ 00:25:47.993 START TEST nvmf_perf 00:25:47.993 ************************************ 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:47.993 * Looking for test storage... 00:25:47.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:47.993 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:48.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.253 --rc genhtml_branch_coverage=1 00:25:48.253 --rc genhtml_function_coverage=1 00:25:48.253 --rc genhtml_legend=1 00:25:48.253 --rc geninfo_all_blocks=1 00:25:48.253 --rc geninfo_unexecuted_blocks=1 00:25:48.253 00:25:48.253 ' 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:48.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.253 --rc genhtml_branch_coverage=1 00:25:48.253 --rc genhtml_function_coverage=1 00:25:48.253 --rc genhtml_legend=1 00:25:48.253 --rc geninfo_all_blocks=1 00:25:48.253 --rc geninfo_unexecuted_blocks=1 00:25:48.253 00:25:48.253 ' 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:48.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.253 --rc genhtml_branch_coverage=1 00:25:48.253 --rc genhtml_function_coverage=1 00:25:48.253 --rc genhtml_legend=1 00:25:48.253 --rc geninfo_all_blocks=1 00:25:48.253 --rc geninfo_unexecuted_blocks=1 00:25:48.253 00:25:48.253 ' 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:48.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.253 --rc genhtml_branch_coverage=1 00:25:48.253 --rc genhtml_function_coverage=1 00:25:48.253 --rc genhtml_legend=1 00:25:48.253 --rc geninfo_all_blocks=1 00:25:48.253 --rc geninfo_unexecuted_blocks=1 00:25:48.253 00:25:48.253 ' 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:48.253 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.253 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:48.254 Cannot find device "nvmf_init_br" 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:48.254 Cannot find device "nvmf_init_br2" 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:48.254 Cannot find device "nvmf_tgt_br" 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:48.254 Cannot find device "nvmf_tgt_br2" 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:48.254 Cannot find device "nvmf_init_br" 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:48.254 Cannot find device "nvmf_init_br2" 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:48.254 Cannot find device "nvmf_tgt_br" 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:48.254 Cannot find device "nvmf_tgt_br2" 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:48.254 Cannot find device "nvmf_br" 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:48.254 Cannot find device "nvmf_init_if" 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:48.254 Cannot find device "nvmf_init_if2" 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:48.254 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:48.254 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:48.254 18:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:48.513 18:46:04 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:48.513 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:48.513 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:25:48.513 00:25:48.513 --- 10.0.0.3 ping statistics --- 00:25:48.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.513 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:48.513 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:25:48.513 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:25:48.513 00:25:48.513 --- 10.0.0.4 ping statistics --- 00:25:48.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.513 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:48.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:48.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:25:48.513 00:25:48.513 --- 10.0.0.1 ping statistics --- 00:25:48.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.513 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:48.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:25:48.513 00:25:48.513 --- 10.0.0.2 ping statistics --- 00:25:48.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.513 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # return 0 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=107041 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 107041 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 107041 ']' 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:48.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
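The "Cannot find device" messages above are expected on a clean runner: nvmf_veth_init tears down any stale topology before rebuilding it. Reduced to a single initiator/target pair, the topology assembled above amounts to the following sketch (interface names and addresses taken from the log; assumes iproute2 and root privileges):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge joining both pairs
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3   # the host should now reach the target namespace

The ping round trips recorded above (0.114, 0.054, 0.023 and 0.051 ms) confirm all four paths across the bridge before the target is started.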
00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:48.513 18:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:48.772 [2024-12-14 18:46:04.314809] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:48.772 [2024-12-14 18:46:04.314973] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.772 [2024-12-14 18:46:04.456980] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:49.031 [2024-12-14 18:46:04.552157] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.031 [2024-12-14 18:46:04.552221] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.031 [2024-12-14 18:46:04.552232] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.031 [2024-12-14 18:46:04.552240] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.031 [2024-12-14 18:46:04.552247] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:49.031 [2024-12-14 18:46:04.552443] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.031 [2024-12-14 18:46:04.552589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.031 [2024-12-14 18:46:04.552921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:49.031 [2024-12-14 18:46:04.552925] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.970 18:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:49.970 18:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:25:49.970 18:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:49.970 18:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:49.970 18:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:49.970 18:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.970 18:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:49.970 18:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:25:50.229 18:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:25:50.229 18:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:50.487 18:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:25:50.487 18:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:50.745 18:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:50.745 18:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:25:50.745 18:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 
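At this point the target application is up inside the namespace and perf.sh has assembled its bdev list (Malloc0 plus Nvme0n1). Condensed, that provisioning comes down to one jq lookup and one RPC; rpc below is just shorthand for the rpc.py path used throughout this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Recover the PCI address of the controller attached by gen_nvme.sh (the jq
  # filter mirrors perf.sh@30):
  $rpc framework_get_config bdev | jq -r '.[].params | select(.name=="Nvme0").traddr'
  # -> 0000:00:10.0
  # Create a 64 MiB malloc bdev with a 512-byte block size to sit alongside Nvme0n1:
  $rpc bdev_malloc_create 64 512
  # -> Malloc0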
00:25:50.745 18:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:50.745 18:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:51.313 [2024-12-14 18:46:06.769694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.313 18:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:51.571 18:46:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:51.571 18:46:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:51.829 18:46:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:51.829 18:46:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:52.087 18:46:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:52.346 [2024-12-14 18:46:07.904481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:52.346 18:46:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:52.614 18:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:25:52.614 18:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:25:52.614 18:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:52.614 18:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:25:53.584 Initializing NVMe Controllers 00:25:53.584 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:25:53.584 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:25:53.584 Initialization complete. Launching workers. 00:25:53.584 ======================================================== 00:25:53.584 Latency(us) 00:25:53.584 Device Information : IOPS MiB/s Average min max 00:25:53.584 PCIE (0000:00:10.0) NSID 1 from core 0: 21154.00 82.63 1512.25 251.77 8983.00 00:25:53.584 ======================================================== 00:25:53.584 Total : 21154.00 82.63 1512.25 251.77 8983.00
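With the local PCIe baseline (roughly 21k IOPS at queue depth 32) recorded, every fabric run that follows targets the subsystem configured just above. Stripped of the xtrace noise, that target-side bring-up is six RPCs (rpc again stands for the rpc.py path used in this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # becomes NSID 1 (512 B sectors)
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # becomes NSID 2 (4096 B sectors)
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

The "NVMe/TCP Target Listening on 10.0.0.3 port 4420" notice above confirms the listener took effect; the NSID 1 and NSID 2 rows in the fabric latency tables below correspond to Malloc0 and Nvme0n1 respectively.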
00:25:53.843 18:46:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:25:55.219 Initializing NVMe Controllers 00:25:55.219 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:25:55.219 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:55.219 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:55.219 Initialization complete. Launching workers.
00:25:55.219 ======================================================== 00:25:55.219 Latency(us) 00:25:55.219 Device Information : IOPS MiB/s Average min max 00:25:55.219 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2495.00 9.75 400.43 122.01 7119.64 00:25:55.219 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8120.01 5297.00 12048.45 00:25:55.219 ======================================================== 00:25:55.219 Total : 2619.00 10.23 765.92 122.01 12048.45 00:25:55.219 00:25:55.219 18:46:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:25:56.596 Initializing NVMe Controllers 00:25:56.596 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:25:56.596 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:56.596 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:56.596 Initialization complete. Launching workers. 00:25:56.596 ======================================================== 00:25:56.596 Latency(us) 00:25:56.596 Device Information : IOPS MiB/s Average min max 00:25:56.596 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7957.50 31.08 4024.90 815.32 9614.34 00:25:56.596 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2717.78 10.62 11940.51 5267.64 23910.54 00:25:56.596 ======================================================== 00:25:56.596 Total : 10675.28 41.70 6040.10 815.32 23910.54 00:25:56.596 00:25:56.596 18:46:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:25:56.596 18:46:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:25:59.131 Initializing NVMe Controllers 00:25:59.131 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:25:59.131 Controller IO queue size 128, less than required. 00:25:59.131 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:59.131 Controller IO queue size 128, less than required. 00:25:59.131 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:59.131 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:59.131 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:59.131 Initialization complete. Launching workers.
00:25:59.131 ======================================================== 00:25:59.131 Latency(us) 00:25:59.131 Device Information : IOPS MiB/s Average min max 00:25:59.131 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1495.40 373.85 87159.30 64585.26 163370.80 00:25:59.131 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 579.77 144.94 232425.84 83687.12 365372.95 00:25:59.131 ======================================================== 00:25:59.131 Total : 2075.16 518.79 127744.36 64585.26 365372.95 00:25:59.131 00:25:59.131 18:46:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:25:59.390 Initializing NVMe Controllers 00:25:59.390 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:25:59.390 Controller IO queue size 128, less than required. 00:25:59.390 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:59.390 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:59.390 Controller IO queue size 128, less than required. 00:25:59.390 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:59.390 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:25:59.390 WARNING: Some requested NVMe devices were skipped 00:25:59.390 No valid NVMe controllers or AIO or URING devices found
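The -o 36964 run is effectively a negative test: spdk_nvme_perf drops any namespace whose sector size does not divide the I/O size, and 36964 is a multiple of neither 512 nor 4096, so both namespaces are removed and nothing is left to test, exactly as the warnings show. A quick check:

  echo $((36964 % 512))    # -> 100, not 0: misaligned for NSID 1 (512 B sectors)
  echo $((36964 % 4096))   # -> 100, not 0: misaligned for NSID 2 (4096 B sectors)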
00:25:59.390 18:46:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:26:01.922 Initializing NVMe Controllers 00:26:01.922 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:01.922 Controller IO queue size 128, less than required. 00:26:01.922 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:01.922 Controller IO queue size 128, less than required. 00:26:01.922 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:01.922 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:01.922 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:01.922 Initialization complete. Launching workers.
00:26:01.922 00:26:01.922 ==================== 00:26:01.922 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:01.922 TCP transport: 00:26:01.922 polls: 8246 00:26:01.922 idle_polls: 5311 00:26:01.922 sock_completions: 2935 00:26:01.922 nvme_completions: 4273 00:26:01.922 submitted_requests: 6496 00:26:01.922 queued_requests: 1 00:26:01.922 00:26:01.922 ==================== 00:26:01.922 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:01.922 TCP transport: 00:26:01.922 polls: 8621 00:26:01.922 idle_polls: 5829 00:26:01.922 sock_completions: 2792 00:26:01.922 nvme_completions: 5395 00:26:01.922 submitted_requests: 8018 00:26:01.922 queued_requests: 1 00:26:01.922 ======================================================== 00:26:01.922 Latency(us) 00:26:01.922 Device Information : IOPS MiB/s Average min max 00:26:01.922 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1067.88 266.97 122887.34 72712.30 200098.41 00:26:01.922 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1348.35 337.09 96702.54 51818.25 145903.65 00:26:01.922 ======================================================== 00:26:01.922 Total : 2416.23 604.06 108275.22 51818.25 200098.41 00:26:01.922 00:26:01.922 18:46:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:02.180 18:46:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:02.180 18:46:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:26:02.180 18:46:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:26:02.180 18:46:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:26:02.748 18:46:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=cc361c7f-1fba-483f-a660-5f9abda7ce7e 00:26:02.748 18:46:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb cc361c7f-1fba-483f-a660-5f9abda7ce7e 00:26:02.748 18:46:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=cc361c7f-1fba-483f-a660-5f9abda7ce7e 00:26:02.748 18:46:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:26:02.748 18:46:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:26:02.748 18:46:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:26:02.748 18:46:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:03.007 18:46:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:26:03.007 { 00:26:03.007 "base_bdev": "Nvme0n1", 00:26:03.007 "block_size": 4096, 00:26:03.007 "cluster_size": 4194304, 00:26:03.007 "free_clusters": 1278, 00:26:03.007 "name": "lvs_0", 00:26:03.007 "total_data_clusters": 1278, 00:26:03.007 "uuid": "cc361c7f-1fba-483f-a660-5f9abda7ce7e" 00:26:03.007 } 00:26:03.007 ]' 00:26:03.007 18:46:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="cc361c7f-1fba-483f-a660-5f9abda7ce7e") .free_clusters' 00:26:03.007 18:46:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278
00:26:03.007 18:46:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="cc361c7f-1fba-483f-a660-5f9abda7ce7e") .cluster_size' 00:26:03.007 18:46:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:26:03.007 18:46:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:26:03.007 5112 00:26:03.007 18:46:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:26:03.007 18:46:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:26:03.007 18:46:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cc361c7f-1fba-483f-a660-5f9abda7ce7e lbd_0 5112 00:26:03.575 18:46:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=8e2c098e-d461-48bf-bf5b-38c35c6cf61d 00:26:03.575 18:46:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 8e2c098e-d461-48bf-bf5b-38c35c6cf61d lvs_n_0 00:26:03.833 18:46:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=5a36ff7a-d05e-4c35-9598-1f08fdc6c5ac 00:26:03.833 18:46:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 5a36ff7a-d05e-4c35-9598-1f08fdc6c5ac 00:26:03.833 18:46:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=5a36ff7a-d05e-4c35-9598-1f08fdc6c5ac 00:26:03.833 18:46:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:26:03.833 18:46:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:26:03.833 18:46:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:26:03.833 18:46:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:04.092 18:46:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:26:04.092 { 00:26:04.092 "base_bdev": "Nvme0n1", 00:26:04.092 "block_size": 4096, 00:26:04.092 "cluster_size": 4194304, 00:26:04.092 "free_clusters": 0, 00:26:04.092 "name": "lvs_0", 00:26:04.092 "total_data_clusters": 1278, 00:26:04.092 "uuid": "cc361c7f-1fba-483f-a660-5f9abda7ce7e" 00:26:04.092 }, 00:26:04.092 { 00:26:04.092 "base_bdev": "8e2c098e-d461-48bf-bf5b-38c35c6cf61d", 00:26:04.092 "block_size": 4096, 00:26:04.092 "cluster_size": 4194304, 00:26:04.092 "free_clusters": 1276, 00:26:04.092 "name": "lvs_n_0", 00:26:04.092 "total_data_clusters": 1276, 00:26:04.092 "uuid": "5a36ff7a-d05e-4c35-9598-1f08fdc6c5ac" 00:26:04.092 } 00:26:04.092 ]' 00:26:04.092 18:46:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="5a36ff7a-d05e-4c35-9598-1f08fdc6c5ac") .free_clusters' 00:26:04.351 18:46:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:26:04.351 18:46:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="5a36ff7a-d05e-4c35-9598-1f08fdc6c5ac") .cluster_size' 00:26:04.351 5104 00:26:04.351 18:46:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:26:04.351 18:46:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:26:04.351 18:46:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:26:04.351 18:46:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']'
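The free-space figures above come straight out of get_lvs_free_mb, which is simply free_clusters times cluster_size converted to MiB. With 4 MiB clusters that gives 5112 MiB for the fresh lvs_0 and 5104 MiB for the nested lvs_n_0 (evidently the nested store's own metadata consumes the remaining two clusters of lbd_0). The same numbers by hand, with the lvstore UUID left as a placeholder:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores \
    | jq '.[] | select(.uuid=="<lvstore uuid>") .free_clusters'
  # free_mb = free_clusters * cluster_size / MiB
  echo $((1278 * 4194304 / 1048576))   # lvs_0 at creation -> 5112
  echo $((1276 * 4194304 / 1048576))   # nested lvs_n_0    -> 5104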
00:26:04.351 18:46:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5a36ff7a-d05e-4c35-9598-1f08fdc6c5ac lbd_nest_0 5104 00:26:04.610 18:46:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=a1134e6f-12a6-43c9-bcfe-ee9353593464 00:26:04.610 18:46:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:04.869 18:46:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:26:04.869 18:46:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 a1134e6f-12a6-43c9-bcfe-ee9353593464 00:26:05.128 18:46:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:05.387 18:46:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:26:05.387 18:46:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:26:05.387 18:46:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:05.387 18:46:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:05.387 18:46:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:05.953 Initializing NVMe Controllers 00:26:05.953 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:05.953 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:26:05.953 WARNING: Some requested NVMe devices were skipped 00:26:05.953 No valid NVMe controllers or AIO or URING devices found
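That skip is also by design: the nested lvol namespace sits on a 4096-byte block size, so the 512-byte entries of the sweep can never attach it, and only the 131072-byte runs below produce results. The sweep itself (perf.sh@95-99) is a plain nested loop over queue depth and I/O size; schematically:

  qd_depth=("1" "32" "128")
  io_size=("512" "131072")
  for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
      # randrw 50/50 for 10 s per combination against the TCP listener
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q "$qd" -o "$o" \
        -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
    done
  done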
00:26:05.953 18:46:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:05.953 18:46:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:18.166 Initializing NVMe Controllers 00:26:18.166 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:18.166 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:18.166 Initialization complete. Launching workers.
00:26:18.166 ======================================================== 00:26:18.166 Latency(us) 00:26:18.166 Device Information : IOPS MiB/s Average min max 00:26:18.166 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 809.53 101.19 1234.97 379.03 8375.40 00:26:18.166 ======================================================== 00:26:18.166 Total : 809.53 101.19 1234.97 379.03 8375.40 00:26:18.166 00:26:18.166 18:46:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:18.166 18:46:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:18.166 18:46:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:18.166 Initializing NVMe Controllers 00:26:18.166 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:18.166 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:26:18.166 WARNING: Some requested NVMe devices were skipped 00:26:18.166 No valid NVMe controllers or AIO or URING devices found 00:26:18.166 18:46:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:18.166 18:46:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:28.156 Initializing NVMe Controllers 00:26:28.156 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:28.156 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:28.156 Initialization complete. Launching workers.
00:26:28.156 ======================================================== 00:26:28.156 Latency(us) 00:26:28.156 Device Information : IOPS MiB/s Average min max 00:26:28.156 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1025.38 128.17 31234.15 7856.25 286799.18 00:26:28.156 ======================================================== 00:26:28.156 Total : 1025.38 128.17 31234.15 7856.25 286799.18 00:26:28.156 00:26:28.156 18:46:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:28.156 18:46:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:28.156 18:46:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:28.156 Initializing NVMe Controllers 00:26:28.156 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:28.156 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:26:28.156 WARNING: Some requested NVMe devices were skipped 00:26:28.156 No valid NVMe controllers or AIO or URING devices found 00:26:28.156 18:46:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:28.156 18:46:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:38.136 Initializing NVMe Controllers 00:26:38.136 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:38.136 Controller IO queue size 128, less than required. 00:26:38.136 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:38.136 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:38.136 Initialization complete. Launching workers. 
00:26:38.136 ======================================================== 00:26:38.136 Latency(us) 00:26:38.136 Device Information : IOPS MiB/s Average min max 00:26:38.136 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3654.02 456.75 35070.19 13653.31 72741.25 00:26:38.136 ======================================================== 00:26:38.136 Total : 3654.02 456.75 35070.19 13653.31 72741.25 00:26:38.136 00:26:38.136 18:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:38.136 18:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a1134e6f-12a6-43c9-bcfe-ee9353593464 00:26:38.136 18:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:26:38.395 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8e2c098e-d461-48bf-bf5b-38c35c6cf61d 00:26:38.654 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:26:38.913 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:38.913 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:26:38.913 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:38.913 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:26:38.913 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:38.913 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:26:38.913 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:38.913 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:38.913 rmmod nvme_tcp 00:26:38.913 rmmod nvme_fabrics 00:26:39.172 rmmod nvme_keyring 00:26:39.172 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:39.172 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:26:39.172 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:26:39.172 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 107041 ']' 00:26:39.172 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 107041 00:26:39.172 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 107041 ']' 00:26:39.172 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 107041 00:26:39.172 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:26:39.172 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:39.172 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 107041 00:26:39.172 killing process with pid 107041 00:26:39.172 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:39.172 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:39.172 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 107041' 00:26:39.172 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@969 -- # kill 107041 00:26:39.172 18:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 107041 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:40.549 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:40.808 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:40.808 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.808 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:40.808 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.808 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:26:40.808 00:26:40.808 real 0m52.793s 00:26:40.808 user 3m19.683s 00:26:40.808 sys 0m10.880s 00:26:40.808 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:40.808 18:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:40.808 ************************************ 00:26:40.808 END TEST nvmf_perf 00:26:40.808 ************************************ 00:26:40.808 18:46:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test 
nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:40.808 18:46:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:40.808 18:46:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:40.808 18:46:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.808 ************************************ 00:26:40.808 START TEST nvmf_fio_host 00:26:40.808 ************************************ 00:26:40.808 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:40.808 * Looking for test storage... 00:26:40.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:40.808 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:40.808 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:26:40.808 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:41.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.069 --rc genhtml_branch_coverage=1 00:26:41.069 --rc genhtml_function_coverage=1 00:26:41.069 --rc genhtml_legend=1 00:26:41.069 --rc geninfo_all_blocks=1 00:26:41.069 --rc geninfo_unexecuted_blocks=1 00:26:41.069 00:26:41.069 ' 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:41.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.069 --rc genhtml_branch_coverage=1 00:26:41.069 --rc genhtml_function_coverage=1 00:26:41.069 --rc genhtml_legend=1 00:26:41.069 --rc geninfo_all_blocks=1 00:26:41.069 --rc geninfo_unexecuted_blocks=1 00:26:41.069 00:26:41.069 ' 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:41.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.069 --rc genhtml_branch_coverage=1 00:26:41.069 --rc genhtml_function_coverage=1 00:26:41.069 --rc genhtml_legend=1 00:26:41.069 --rc geninfo_all_blocks=1 00:26:41.069 --rc geninfo_unexecuted_blocks=1 00:26:41.069 00:26:41.069 ' 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:41.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.069 --rc genhtml_branch_coverage=1 00:26:41.069 --rc genhtml_function_coverage=1 00:26:41.069 --rc genhtml_legend=1 00:26:41.069 --rc geninfo_all_blocks=1 00:26:41.069 --rc geninfo_unexecuted_blocks=1 00:26:41.069 00:26:41.069 ' 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.069 18:46:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.069 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.070 18:46:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:41.070 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
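nvmftestinit now rebuilds the virtual test network; the "Cannot find device" and "Cannot open network namespace" messages that follow are expected on a clean host, because the teardown half of the sequence runs before anything exists. The rebuilt topology is two initiator veth pairs and two target veth pairs, with the target ends moved into the nvmf_tgt_ns_spdk namespace and all host-side peer ends enslaved to the nvmf_br bridge. A condensed sketch with a single initiator/target pair (interface names, addresses, and the iptables rule are taken from the trace below; the second pair is built the same way with 10.0.0.2 and 10.0.0.4; run as root):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                       # bridge the host-side peer ends
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.3                                            # initiator-to-target sanity check

Bridging the peer ends is what lets the host-side initiator reach the namespaced target over ordinary TCP, and tagging each iptables rule with an SPDK_NVMF comment is what makes the iptables-save | grep -v SPDK_NVMF | iptables-restore cleanup seen at the top of this section possible.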
00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:41.070 Cannot find device "nvmf_init_br" 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:41.070 Cannot find device "nvmf_init_br2" 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:41.070 Cannot find device "nvmf_tgt_br" 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:26:41.070 Cannot find device "nvmf_tgt_br2" 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:41.070 Cannot find device "nvmf_init_br" 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:41.070 Cannot find device "nvmf_init_br2" 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:41.070 Cannot find device "nvmf_tgt_br" 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:41.070 Cannot find device "nvmf_tgt_br2" 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:41.070 Cannot find device "nvmf_br" 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:41.070 Cannot find device "nvmf_init_if" 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:41.070 Cannot find device "nvmf_init_if2" 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:41.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:41.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:41.070 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:41.329 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:41.330 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:41.330 18:46:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:41.330 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:41.330 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.123 ms 00:26:41.330 00:26:41.330 --- 10.0.0.3 ping statistics --- 00:26:41.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.330 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:41.330 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:41.330 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.114 ms 00:26:41.330 00:26:41.330 --- 10.0.0.4 ping statistics --- 00:26:41.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.330 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:41.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:41.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:26:41.330 00:26:41.330 --- 10.0.0.1 ping statistics --- 00:26:41.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.330 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:41.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:41.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:26:41.330 00:26:41.330 --- 10.0.0.2 ping statistics --- 00:26:41.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.330 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # return 0 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=108070 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 108070 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@831 -- # '[' -z 108070 ']' 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:41.330 18:46:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.589 [2024-12-14 18:46:57.136698] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:26:41.589 [2024-12-14 18:46:57.136935] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.589 [2024-12-14 18:46:57.283692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:41.848 [2024-12-14 18:46:57.388016] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.848 [2024-12-14 18:46:57.388267] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:41.848 [2024-12-14 18:46:57.388293] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.848 [2024-12-14 18:46:57.388305] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.848 [2024-12-14 18:46:57.388314] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
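The target is launched inside the namespace as ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF, so the four "Reactor started on core N" notices below are expected: -m 0xF pins one SPDK reactor to each of cores 0-3, -e 0xFFFF enables every tracepoint group (the "Tracepoint Group Mask 0xFFFF" notice above), and -i 0 selects shared-memory id 0, which is why the banner points at /dev/shm/nvmf_trace.0. waitforlisten then blocks until the RPC socket answers; a minimal stand-in (a sketch, not the harness's exact implementation, polling with the real rpc_get_methods RPC against the default socket path):

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
# Poll until the app's RPC server is up; rpc_get_methods is a harmless query.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done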
00:26:41.848 [2024-12-14 18:46:57.388902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.848 [2024-12-14 18:46:57.389020] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:41.848 [2024-12-14 18:46:57.389127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.848 [2024-12-14 18:46:57.389117] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:42.785 18:46:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:42.785 18:46:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:26:42.785 18:46:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:42.785 [2024-12-14 18:46:58.455593] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.785 18:46:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:42.785 18:46:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:42.785 18:46:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.785 18:46:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:43.353 Malloc1 00:26:43.353 18:46:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:43.611 18:46:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:43.870 18:46:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:44.129 [2024-12-14 18:46:59.685962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:44.129 18:46:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:44.388 18:46:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:26:44.388 18:46:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:26:44.388 18:46:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:26:44.388 18:46:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:44.388 18:46:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:44.388 18:46:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:44.388 18:46:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:44.388 18:46:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # shift 00:26:44.388 18:46:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:44.388 18:46:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:44.388 18:46:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:44.388 18:46:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:44.388 18:46:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:26:44.388 18:47:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:44.388 18:47:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:44.388 18:47:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:44.388 18:47:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:44.388 18:47:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:44.388 18:47:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:44.388 18:47:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:44.388 18:47:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:44.388 18:47:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:44.388 18:47:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:26:44.647 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:44.647 fio-3.35 00:26:44.647 Starting 1 thread 00:26:47.180 00:26:47.180 test: (groupid=0, jobs=1): err= 0: pid=108201: Sat Dec 14 18:47:02 2024 00:26:47.180 read: IOPS=8748, BW=34.2MiB/s (35.8MB/s)(68.6MiB/2007msec) 00:26:47.180 slat (nsec): min=1706, max=353121, avg=2965.40, stdev=4208.00 00:26:47.180 clat (usec): min=3593, max=14000, avg=7680.72, stdev=759.38 00:26:47.180 lat (usec): min=3634, max=14003, avg=7683.68, stdev=759.21 00:26:47.180 clat percentiles (usec): 00:26:47.180 | 1.00th=[ 6128], 5.00th=[ 6587], 10.00th=[ 6783], 20.00th=[ 7046], 00:26:47.180 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7832], 00:26:47.180 | 70.00th=[ 8029], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8979], 00:26:47.180 | 99.00th=[ 9765], 99.50th=[10028], 99.90th=[12256], 99.95th=[13566], 00:26:47.180 | 99.99th=[13960] 00:26:47.180 bw ( KiB/s): min=32392, max=36760, per=99.96%, avg=34980.00, stdev=1859.01, samples=4 00:26:47.180 iops : min= 8098, max= 9190, avg=8745.00, stdev=464.75, samples=4 00:26:47.180 write: IOPS=8748, BW=34.2MiB/s (35.8MB/s)(68.6MiB/2007msec); 0 zone resets 00:26:47.180 slat (nsec): min=1800, max=271198, avg=3093.49, stdev=3063.85 00:26:47.180 clat (usec): min=2502, max=12834, avg=6880.48, stdev=673.85 00:26:47.180 lat (usec): min=2515, max=12837, avg=6883.58, stdev=673.68 00:26:47.180 clat percentiles (usec): 00:26:47.180 | 1.00th=[ 5473], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6325], 
00:26:47.180 | 30.00th=[ 6521], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 7046], 00:26:47.180 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 7701], 95.00th=[ 7963], 00:26:47.180 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[10814], 99.95th=[11338], 00:26:47.180 | 99.99th=[12780] 00:26:47.180 bw ( KiB/s): min=33240, max=36672, per=100.00%, avg=34998.00, stdev=1470.63, samples=4 00:26:47.180 iops : min= 8310, max= 9168, avg=8749.50, stdev=367.66, samples=4 00:26:47.181 lat (msec) : 4=0.07%, 10=99.62%, 20=0.31% 00:26:47.181 cpu : usr=59.92%, sys=29.36%, ctx=9, majf=0, minf=8 00:26:47.181 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:47.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:47.181 issued rwts: total=17558,17558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.181 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:47.181 00:26:47.181 Run status group 0 (all jobs): 00:26:47.181 READ: bw=34.2MiB/s (35.8MB/s), 34.2MiB/s-34.2MiB/s (35.8MB/s-35.8MB/s), io=68.6MiB (71.9MB), run=2007-2007msec 00:26:47.181 WRITE: bw=34.2MiB/s (35.8MB/s), 34.2MiB/s-34.2MiB/s (35.8MB/s-35.8MB/s), io=68.6MiB (71.9MB), run=2007-2007msec 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:47.181 18:47:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:26:47.181 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:47.181 fio-3.35 00:26:47.181 Starting 1 thread 00:26:49.712 00:26:49.712 test: (groupid=0, jobs=1): err= 0: pid=108244: Sat Dec 14 18:47:05 2024 00:26:49.712 read: IOPS=7308, BW=114MiB/s (120MB/s)(229MiB/2006msec) 00:26:49.712 slat (usec): min=2, max=165, avg= 3.95, stdev= 3.40 00:26:49.712 clat (usec): min=2264, max=23661, avg=10409.79, stdev=2607.62 00:26:49.712 lat (usec): min=2267, max=23664, avg=10413.74, stdev=2607.73 00:26:49.712 clat percentiles (usec): 00:26:49.712 | 1.00th=[ 5276], 5.00th=[ 6390], 10.00th=[ 7111], 20.00th=[ 8029], 00:26:49.712 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[11076], 00:26:49.712 | 70.00th=[11994], 80.00th=[12780], 90.00th=[13566], 95.00th=[14484], 00:26:49.712 | 99.00th=[16909], 99.50th=[17957], 99.90th=[19268], 99.95th=[19530], 00:26:49.712 | 99.99th=[19792] 00:26:49.712 bw ( KiB/s): min=45760, max=70400, per=50.60%, avg=59168.00, stdev=11299.16, samples=4 00:26:49.712 iops : min= 2860, max= 4400, avg=3698.00, stdev=706.20, samples=4 00:26:49.712 write: IOPS=4396, BW=68.7MiB/s (72.0MB/s)(121MiB/1767msec); 0 zone resets 00:26:49.712 slat (usec): min=29, max=420, avg=39.92, stdev=11.65 00:26:49.712 clat (usec): min=3853, max=20061, avg=12418.99, stdev=2393.00 00:26:49.712 lat (usec): min=3902, max=20094, avg=12458.91, stdev=2393.14 00:26:49.712 clat percentiles (usec): 00:26:49.712 | 1.00th=[ 7832], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10421], 00:26:49.712 | 30.00th=[11076], 40.00th=[11600], 50.00th=[12125], 60.00th=[12780], 00:26:49.712 | 70.00th=[13304], 80.00th=[14222], 90.00th=[15926], 95.00th=[16909], 00:26:49.712 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19792], 99.95th=[19792], 00:26:49.712 | 99.99th=[20055] 00:26:49.712 bw ( KiB/s): min=48960, max=73024, per=87.47%, avg=61536.00, stdev=11194.51, samples=4 00:26:49.712 iops : min= 3060, max= 4564, avg=3846.00, stdev=699.66, samples=4 00:26:49.712 lat (msec) : 4=0.11%, 10=35.25%, 20=64.63%, 50=0.01% 00:26:49.712 cpu : usr=70.12%, sys=19.95%, ctx=7, majf=0, minf=11 00:26:49.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:49.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:49.712 issued rwts: total=14661,7769,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:49.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:49.712 00:26:49.712 Run status group 0 (all jobs): 00:26:49.712 READ: bw=114MiB/s (120MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s), io=229MiB (240MB), run=2006-2006msec 
00:26:49.712 WRITE: bw=68.7MiB/s (72.0MB/s), 68.7MiB/s-68.7MiB/s (72.0MB/s-72.0MB/s), io=121MiB (127MB), run=1767-1767msec 00:26:49.712 18:47:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:49.712 18:47:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:26:49.712 18:47:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:26:49.712 18:47:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:26:49.712 18:47:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:26:49.712 18:47:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:26:49.712 18:47:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:49.712 18:47:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:49.712 18:47:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:26:49.712 18:47:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:26:49.712 18:47:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:26:49.712 18:47:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:26:50.292 Nvme0n1 00:26:50.292 18:47:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:26:50.593 18:47:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=8ab27601-0c75-4919-815e-56bc25e04daa 00:26:50.593 18:47:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 8ab27601-0c75-4919-815e-56bc25e04daa 00:26:50.593 18:47:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=8ab27601-0c75-4919-815e-56bc25e04daa 00:26:50.593 18:47:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:26:50.593 18:47:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:26:50.593 18:47:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:26:50.593 18:47:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:50.852 18:47:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:26:50.852 { 00:26:50.852 "base_bdev": "Nvme0n1", 00:26:50.852 "block_size": 4096, 00:26:50.852 "cluster_size": 1073741824, 00:26:50.852 "free_clusters": 4, 00:26:50.852 "name": "lvs_0", 00:26:50.852 "total_data_clusters": 4, 00:26:50.852 "uuid": "8ab27601-0c75-4919-815e-56bc25e04daa" 00:26:50.852 } 00:26:50.852 ]' 00:26:50.852 18:47:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="8ab27601-0c75-4919-815e-56bc25e04daa") .free_clusters' 00:26:50.852 18:47:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:26:50.852 18:47:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="8ab27601-0c75-4919-815e-56bc25e04daa") .cluster_size' 00:26:50.852 18:47:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:26:50.852 18:47:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:26:50.852 4096 00:26:50.852 18:47:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:26:50.852 18:47:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:26:51.111 e22c392d-8d55-4f6b-8283-9effeb96452e 00:26:51.111 18:47:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:26:51.678 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:26:51.678 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:26:51.936 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:26:51.936 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:26:51.936 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:51.936 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:51.936 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:51.936 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:51.936 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:26:51.936 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:51.936 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:52.194 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:52.194 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:26:52.194 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:52.194 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:52.194 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:52.194 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:52.194 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:52.194 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 
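The get_lvs_free_mb helper traced just above reduces to two jq lookups plus shell arithmetic: lvs_0 reports 4 free clusters of 1073741824 bytes each, i.e. 4 GiB, so free_mb=4096, and that is the size handed to bdev_lvol_create. A sketch of the same computation (UUID, RPC name, and jq filters taken from the trace; the rpc.py path is assigned to a variable for brevity):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
uuid=8ab27601-0c75-4919-815e-56bc25e04daa
fc=$($rpc bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\").free_clusters")
cs=$($rpc bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\").cluster_size")
echo $(( fc * cs / 1024 / 1024 ))   # 4 * 1073741824 / 1048576 = 4096 (MiB)

The same arithmetic recurs later for the nested lvstore lvs_n_0: 1022 free clusters of 4 MiB give the free_mb=4088 seen further down.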
00:26:52.194 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:52.194 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:52.194 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:52.194 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:52.194 18:47:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:26:52.194 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:52.194 fio-3.35 00:26:52.194 Starting 1 thread 00:26:54.725 00:26:54.725 test: (groupid=0, jobs=1): err= 0: pid=108403: Sat Dec 14 18:47:10 2024 00:26:54.725 read: IOPS=5887, BW=23.0MiB/s (24.1MB/s)(46.2MiB/2009msec) 00:26:54.725 slat (usec): min=2, max=427, avg= 3.54, stdev= 5.58 00:26:54.725 clat (usec): min=4261, max=21074, avg=11489.26, stdev=1115.65 00:26:54.725 lat (usec): min=4270, max=21077, avg=11492.80, stdev=1115.45 00:26:54.725 clat percentiles (usec): 00:26:54.725 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:26:54.725 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:26:54.725 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12780], 95.00th=[13304], 00:26:54.725 | 99.00th=[14222], 99.50th=[14615], 99.90th=[19792], 99.95th=[20055], 00:26:54.725 | 99.99th=[21103] 00:26:54.725 bw ( KiB/s): min=23112, max=24528, per=99.88%, avg=23522.00, stdev=673.88, samples=4 00:26:54.725 iops : min= 5778, max= 6132, avg=5880.50, stdev=168.47, samples=4 00:26:54.725 write: IOPS=5880, BW=23.0MiB/s (24.1MB/s)(46.1MiB/2009msec); 0 zone resets 00:26:54.725 slat (usec): min=2, max=262, avg= 3.54, stdev= 3.65 00:26:54.725 clat (usec): min=2585, max=19959, avg=10203.28, stdev=989.96 00:26:54.725 lat (usec): min=2598, max=19962, avg=10206.83, stdev=989.91 00:26:54.725 clat percentiles (usec): 00:26:54.725 | 1.00th=[ 8160], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9372], 00:26:54.725 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10421], 00:26:54.725 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11731], 00:26:54.725 | 99.00th=[12518], 99.50th=[12911], 99.90th=[17171], 99.95th=[17433], 00:26:54.725 | 99.99th=[19792] 00:26:54.725 bw ( KiB/s): min=22656, max=24392, per=99.94%, avg=23508.00, stdev=821.38, samples=4 00:26:54.725 iops : min= 5664, max= 6098, avg=5877.00, stdev=205.35, samples=4 00:26:54.725 lat (msec) : 4=0.04%, 10=24.30%, 20=75.64%, 50=0.03% 00:26:54.725 cpu : usr=67.38%, sys=24.60%, ctx=4, majf=0, minf=17 00:26:54.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:26:54.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:54.725 issued rwts: total=11828,11814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:54.725 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:54.725 00:26:54.725 Run status group 0 (all jobs): 00:26:54.725 READ: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.2MiB (48.4MB), run=2009-2009msec 00:26:54.725 WRITE: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), 
io=46.1MiB (48.4MB), run=2009-2009msec 00:26:54.725 18:47:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:54.984 18:47:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:26:55.243 18:47:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=c77cd8a8-275c-4aae-a253-9a255c307146 00:26:55.243 18:47:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb c77cd8a8-275c-4aae-a253-9a255c307146 00:26:55.243 18:47:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=c77cd8a8-275c-4aae-a253-9a255c307146 00:26:55.243 18:47:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:26:55.243 18:47:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:26:55.243 18:47:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:26:55.243 18:47:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:55.501 18:47:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:26:55.501 { 00:26:55.501 "base_bdev": "Nvme0n1", 00:26:55.501 "block_size": 4096, 00:26:55.501 "cluster_size": 1073741824, 00:26:55.501 "free_clusters": 0, 00:26:55.501 "name": "lvs_0", 00:26:55.501 "total_data_clusters": 4, 00:26:55.501 "uuid": "8ab27601-0c75-4919-815e-56bc25e04daa" 00:26:55.501 }, 00:26:55.501 { 00:26:55.501 "base_bdev": "e22c392d-8d55-4f6b-8283-9effeb96452e", 00:26:55.501 "block_size": 4096, 00:26:55.501 "cluster_size": 4194304, 00:26:55.501 "free_clusters": 1022, 00:26:55.501 "name": "lvs_n_0", 00:26:55.501 "total_data_clusters": 1022, 00:26:55.501 "uuid": "c77cd8a8-275c-4aae-a253-9a255c307146" 00:26:55.501 } 00:26:55.501 ]' 00:26:55.501 18:47:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="c77cd8a8-275c-4aae-a253-9a255c307146") .free_clusters' 00:26:55.501 18:47:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:26:55.501 18:47:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="c77cd8a8-275c-4aae-a253-9a255c307146") .cluster_size' 00:26:55.501 4088 00:26:55.501 18:47:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:26:55.501 18:47:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:26:55.501 18:47:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:26:55.501 18:47:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:26:56.068 6e44b65f-5db2-4682-b120-208e485f5215 00:26:56.068 18:47:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:26:56.068 18:47:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:26:56.634 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:56.893 18:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:26:56.893 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:56.893 fio-3.35 00:26:56.893 Starting 1 thread 00:26:59.426 00:26:59.426 test: (groupid=0, jobs=1): err= 0: pid=108524: Sat Dec 14 18:47:14 2024 00:26:59.426 read: IOPS=5242, BW=20.5MiB/s (21.5MB/s)(41.2MiB/2010msec) 00:26:59.426 slat (nsec): min=1857, 
max=419854, avg=2853.56, stdev=5239.63 00:26:59.426 clat (usec): min=4843, max=22208, avg=12864.06, stdev=1267.99 00:26:59.426 lat (usec): min=4861, max=22210, avg=12866.91, stdev=1267.71 00:26:59.426 clat percentiles (usec): 00:26:59.426 | 1.00th=[10159], 5.00th=[10945], 10.00th=[11338], 20.00th=[11863], 00:26:59.426 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:26:59.426 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14353], 95.00th=[14877], 00:26:59.426 | 99.00th=[15795], 99.50th=[16319], 99.90th=[21627], 99.95th=[21890], 00:26:59.426 | 99.99th=[22152] 00:26:59.426 bw ( KiB/s): min=20192, max=22440, per=99.80%, avg=20930.00, stdev=1020.33, samples=4 00:26:59.426 iops : min= 5048, max= 5610, avg=5232.50, stdev=255.08, samples=4 00:26:59.426 write: IOPS=5237, BW=20.5MiB/s (21.5MB/s)(41.1MiB/2010msec); 0 zone resets 00:26:59.426 slat (nsec): min=1958, max=300292, avg=2882.27, stdev=3568.85 00:26:59.426 clat (usec): min=2719, max=21767, avg=11474.23, stdev=1117.96 00:26:59.426 lat (usec): min=2733, max=21769, avg=11477.11, stdev=1117.83 00:26:59.426 clat percentiles (usec): 00:26:59.426 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10683], 00:26:59.426 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:26:59.426 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12780], 95.00th=[13173], 00:26:59.426 | 99.00th=[13960], 99.50th=[14484], 99.90th=[19792], 99.95th=[20317], 00:26:59.426 | 99.99th=[21627] 00:26:59.426 bw ( KiB/s): min=20288, max=21824, per=99.98%, avg=20946.00, stdev=682.78, samples=4 00:26:59.426 iops : min= 5072, max= 5456, avg=5236.50, stdev=170.70, samples=4 00:26:59.426 lat (msec) : 4=0.03%, 10=3.85%, 20=95.97%, 50=0.14% 00:26:59.426 cpu : usr=70.43%, sys=23.05%, ctx=5, majf=0, minf=17 00:26:59.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:26:59.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:59.426 issued rwts: total=10538,10527,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.426 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:59.426 00:26:59.426 Run status group 0 (all jobs): 00:26:59.426 READ: bw=20.5MiB/s (21.5MB/s), 20.5MiB/s-20.5MiB/s (21.5MB/s-21.5MB/s), io=41.2MiB (43.2MB), run=2010-2010msec 00:26:59.426 WRITE: bw=20.5MiB/s (21.5MB/s), 20.5MiB/s-20.5MiB/s (21.5MB/s-21.5MB/s), io=41.1MiB (43.1MB), run=2010-2010msec 00:26:59.426 18:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:59.683 18:47:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:26:59.684 18:47:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:26:59.942 18:47:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:00.201 18:47:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:00.768 18:47:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:00.768 18:47:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:01.705 
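
Both fio runs above go through the fio_plugin wrapper traced at common/autotest_common.sh@1337-1352: before launching fio it probes the SPDK ioengine with ldd for a linked ASan runtime (libasan or libclang_rt.asan) so that, when one is found, the sanitizer library can be preloaded ahead of the plugin. A minimal standalone sketch of that probe, reconstructed from the trace above (the loop body is simplified; paths and names are the ones shown in the log):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    fio_dir=/usr/src/fio
    sanitizers=('libasan' 'libclang_rt.asan')
    LD_PRELOAD=
    for sanitizer in "${sanitizers[@]}"; do
        # third ldd column is the resolved library path; empty if not linked
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && LD_PRELOAD="$asan_lib $LD_PRELOAD"
    done
    # both probes came back empty in this run, so only the plugin is preloaded
    LD_PRELOAD="$LD_PRELOAD $plugin" "$fio_dir/fio" \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096

Note the single-quoted --filename value: the SPDK ioengine encodes the whole NVMe-oF transport ID (trtype/adrfam/traddr/trsvcid/ns) in fio's filename argument, so it has to reach fio as a single word.
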
18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:01.705 rmmod nvme_tcp 00:27:01.705 rmmod nvme_fabrics 00:27:01.705 rmmod nvme_keyring 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 108070 ']' 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 108070 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 108070 ']' 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 108070 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 108070 00:27:01.705 killing process with pid 108070 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 108070' 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 108070 00:27:01.705 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 108070 00:27:01.964 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:01.964 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:01.964 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:01.964 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:27:01.964 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:27:01.964 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:27:01.964 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:01.964 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:01.964 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 
00:27:01.964 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:01.964 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:01.964 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:01.964 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:01.964 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:01.964 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:01.964 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:01.964 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:01.964 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:02.222 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:02.222 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:02.222 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:02.222 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:02.222 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:02.222 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.222 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:02.222 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.222 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:27:02.222 00:27:02.223 real 0m21.464s 00:27:02.223 user 1m32.727s 00:27:02.223 sys 0m5.084s 00:27:02.223 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:02.223 ************************************ 00:27:02.223 END TEST nvmf_fio_host 00:27:02.223 18:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.223 ************************************ 00:27:02.223 18:47:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:02.223 18:47:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:02.223 18:47:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:02.223 18:47:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.223 ************************************ 00:27:02.223 START TEST nvmf_failover 00:27:02.223 ************************************ 00:27:02.223 18:47:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:02.482 * Looking for test storage... 
00:27:02.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:02.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.482 --rc genhtml_branch_coverage=1 00:27:02.482 --rc genhtml_function_coverage=1 00:27:02.482 --rc genhtml_legend=1 00:27:02.482 --rc geninfo_all_blocks=1 00:27:02.482 --rc geninfo_unexecuted_blocks=1 00:27:02.482 00:27:02.482 ' 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:02.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.482 --rc genhtml_branch_coverage=1 00:27:02.482 --rc genhtml_function_coverage=1 00:27:02.482 --rc genhtml_legend=1 00:27:02.482 --rc geninfo_all_blocks=1 00:27:02.482 --rc geninfo_unexecuted_blocks=1 00:27:02.482 00:27:02.482 ' 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:02.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.482 --rc genhtml_branch_coverage=1 00:27:02.482 --rc genhtml_function_coverage=1 00:27:02.482 --rc genhtml_legend=1 00:27:02.482 --rc geninfo_all_blocks=1 00:27:02.482 --rc geninfo_unexecuted_blocks=1 00:27:02.482 00:27:02.482 ' 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:02.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.482 --rc genhtml_branch_coverage=1 00:27:02.482 --rc genhtml_function_coverage=1 00:27:02.482 --rc genhtml_legend=1 00:27:02.482 --rc geninfo_all_blocks=1 00:27:02.482 --rc geninfo_unexecuted_blocks=1 00:27:02.482 00:27:02.482 ' 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.482 
18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:02.482 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:02.482 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 
00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # nvmf_veth_init 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:02.483 Cannot find device "nvmf_init_br" 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:02.483 Cannot find device "nvmf_init_br2" 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:27:02.483 Cannot find device "nvmf_tgt_br" 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:02.483 Cannot find device "nvmf_tgt_br2" 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:02.483 Cannot find device "nvmf_init_br" 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:27:02.483 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:02.742 Cannot find device "nvmf_init_br2" 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:02.742 Cannot find device "nvmf_tgt_br" 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:02.742 Cannot find device "nvmf_tgt_br2" 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:02.742 Cannot find device "nvmf_br" 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:02.742 Cannot find device "nvmf_init_if" 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:02.742 Cannot find device "nvmf_init_if2" 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:02.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:02.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:02.742 
18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:02.742 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:03.001 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:03.001 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:27:03.001 00:27:03.001 --- 10.0.0.3 ping statistics --- 00:27:03.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.001 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:03.001 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:03.001 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:27:03.001 00:27:03.001 --- 10.0.0.4 ping statistics --- 00:27:03.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.001 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:03.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:03.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:27:03.001 00:27:03.001 --- 10.0.0.1 ping statistics --- 00:27:03.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.001 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:03.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:03.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:27:03.001 00:27:03.001 --- 10.0.0.2 ping statistics --- 00:27:03.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.001 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # return 0 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=108854 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 108854 00:27:03.001 18:47:18 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 108854 ']' 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:03.001 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.002 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:03.002 18:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:03.002 [2024-12-14 18:47:18.658941] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:03.002 [2024-12-14 18:47:18.659029] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.260 [2024-12-14 18:47:18.804456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:03.260 [2024-12-14 18:47:18.893278] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:03.260 [2024-12-14 18:47:18.893361] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:03.260 [2024-12-14 18:47:18.893376] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:03.260 [2024-12-14 18:47:18.893387] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:03.260 [2024-12-14 18:47:18.893397] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
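
For orientation: the nvmf_veth_init sequence above (nvmf/common.sh@145-219) is what gives this test its addresses: two initiator-side veths on the host (10.0.0.1 and 10.0.0.2), two target-side veths inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), their peers enslaved to the nvmf_br bridge, plus iptables ACCEPT rules for port 4420. Condensed to a single initiator/target pair (commands taken from the trace; the second pair is symmetric):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if # target
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # bring every interface up, after which the four ping checks above must succeed
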
00:27:03.260 [2024-12-14 18:47:18.893740] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:03.260 [2024-12-14 18:47:18.894020] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:03.260 [2024-12-14 18:47:18.894026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.519 18:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:03.519 18:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:27:03.519 18:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:03.519 18:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:03.519 18:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:03.519 18:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:03.519 18:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:03.777 [2024-12-14 18:47:19.410862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:03.777 18:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:04.036 Malloc0 00:27:04.036 18:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:04.602 18:47:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:04.861 18:47:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:05.119 [2024-12-14 18:47:20.629709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:05.119 18:47:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:27:05.378 [2024-12-14 18:47:20.929923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:05.378 18:47:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:27:05.636 [2024-12-14 18:47:21.218213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:27:05.636 18:47:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:05.636 18:47:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=108952 00:27:05.636 18:47:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:05.636 18:47:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 108952 /var/tmp/bdevperf.sock 
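
The RPC calls above are the whole data-path setup for the failover scenario: one malloc-backed subsystem reachable through three TCP listeners. Reduced to its sequence (socket paths and names as in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s "$port"
    done

bdevperf then attaches the same controller (NVMe0) through ports 4420 and 4421, and the test removes listeners one at a time (failover.sh@43 below drops 4420 first) so that I/O is forced to fail over to a surviving path.
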
00:27:05.636 18:47:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 108952 ']' 00:27:05.636 18:47:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:05.636 18:47:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:05.636 18:47:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:05.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:05.636 18:47:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:05.636 18:47:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:06.571 18:47:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:06.571 18:47:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:27:06.571 18:47:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:07.141 NVMe0n1 00:27:07.141 18:47:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:07.399 00:27:07.399 18:47:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=109000 00:27:07.400 18:47:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:07.400 18:47:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:27:08.334 18:47:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:08.592 [2024-12-14 18:47:24.301760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444e90 is same with the state(6) to be set 00:27:08.592 [2024-12-14 18:47:24.301833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444e90 is same with the state(6) to be set 00:27:08.592 [2024-12-14 18:47:24.301847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444e90 is same with the state(6) to be set 00:27:08.592 [2024-12-14 18:47:24.301856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444e90 is same with the state(6) to be set 00:27:08.592 [2024-12-14 18:47:24.301867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444e90 is same with the state(6) to be set 00:27:08.592 [2024-12-14 18:47:24.301877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444e90 is same with the state(6) to be set 00:27:08.592 [2024-12-14 18:47:24.301886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444e90 is same with the state(6) to be set 00:27:08.592 [2024-12-14 18:47:24.301905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444e90 is same with the state(6) to be set 00:27:08.592 [2024-12-14 
18:47:24.301914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444e90 is same with the state(6) to be set 00:27:08.592 [2024-12-14 18:47:24.301922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444e90 is same with the state(6) to be set 00:27:08.592 [2024-12-14 18:47:24.301930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444e90 is same with the state(6) to be set 00:27:08.592 [2024-12-14 18:47:24.301938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444e90 is same with the state(6) to be set 00:27:08.592 [2024-12-14 18:47:24.301947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444e90 is same with the state(6) to be set 00:27:08.592 [2024-12-14 18:47:24.301956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444e90 is same with the state(6) to be set 00:27:08.592 [2024-12-14 18:47:24.301964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444e90 is same with the state(6) to be set 00:27:08.592 18:47:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:27:11.876 18:47:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:12.135 00:27:12.135 18:47:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:27:12.395 [2024-12-14 18:47:28.093022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445c40 is same with the state(6) to be set 00:27:12.395 [2024-12-14 18:47:28.093101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445c40 is same with the state(6) to be set 00:27:12.395 [2024-12-14 18:47:28.093129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445c40 is same with the state(6) to be set 00:27:12.395 [2024-12-14 18:47:28.093148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445c40 is same with the state(6) to be set 00:27:12.395 [2024-12-14 18:47:28.093155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445c40 is same with the state(6) to be set 00:27:12.395 [2024-12-14 18:47:28.093163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445c40 is same with the state(6) to be set 00:27:12.395 [2024-12-14 18:47:28.093171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445c40 is same with the state(6) to be set 00:27:12.395 [2024-12-14 18:47:28.093179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445c40 is same with the state(6) to be set 00:27:12.395 [2024-12-14 18:47:28.093187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445c40 is same with the state(6) to be set 00:27:12.395 [2024-12-14 18:47:28.093195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445c40 is same with the state(6) to be set 00:27:12.395 [2024-12-14 18:47:28.093202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445c40 is same with the state(6) to be set 00:27:12.395 [2024-12-14 18:47:28.093210] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445c40 is same with the state(6) to be set 00:27:12.397 [2024-12-14 18:47:28.094243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445c40 is same with the state(6) to be set 00:27:12.397 [2024-12-14 18:47:28.094250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445c40 is same with the state(6) to be set 00:27:12.397 [2024-12-14 18:47:28.094257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445c40 is same with the state(6) to be set 00:27:12.397 [2024-12-14 18:47:28.094264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445c40 is same with the state(6) to be set 00:27:12.397 [2024-12-14 18:47:28.094272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445c40 is same with the state(6) to be set 00:27:12.397 [2024-12-14 18:47:28.094279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445c40 is same with the state(6) to be set 00:27:12.397 18:47:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:27:15.681 18:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:15.681 [2024-12-14 18:47:31.418717] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:15.940 18:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:27:16.875 18:47:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:27:17.134 18:47:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 109000 00:27:23.703 { 00:27:23.703 "results": [ 00:27:23.703 { 00:27:23.703 "job": "NVMe0n1", 00:27:23.703 "core_mask": "0x1", 00:27:23.703 "workload": "verify", 00:27:23.703 "status": "finished", 00:27:23.703 "verify_range": { 00:27:23.703 "start": 0, 00:27:23.703 "length": 16384 00:27:23.703 }, 00:27:23.703 "queue_depth": 128, 00:27:23.703 "io_size": 4096, 00:27:23.703 "runtime": 15.005842, 00:27:23.703 "iops": 9778.19172026468, 00:27:23.703 "mibps": 38.196061407283906, 00:27:23.703 "io_failed": 3325, 00:27:23.703 "io_timeout": 0, 00:27:23.703 "avg_latency_us": 12772.267675040364, 00:27:23.703 "min_latency_us": 774.5163636363636, 00:27:23.703 "max_latency_us": 26691.025454545455 00:27:23.703 } 00:27:23.703 ], 00:27:23.703 "core_count": 1 00:27:23.703 } 00:27:23.703 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 108952 00:27:23.703 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 108952 ']' 00:27:23.703 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 108952 00:27:23.703 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:27:23.703 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:23.703 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 108952 00:27:23.703 killing process with pid 108952 00:27:23.703 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:23.703 18:47:38 
00:27:23.703 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 108952
00:27:23.703 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 108952 ']'
00:27:23.703 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 108952
00:27:23.703 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:27:23.703 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:23.703 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 108952
killing process with pid 108952
18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:27:23.703 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 108952'
00:27:23.703 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 108952
00:27:23.703 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 108952
00:27:23.703 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
[2024-12-14 18:47:21.295722] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
[2024-12-14 18:47:21.295892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108952 ]
[2024-12-14 18:47:21.438392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-14 18:47:21.538658] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
9307.00 IOPS, 36.36 MiB/s [2024-12-14T18:47:39.456Z]
[2024-12-14 18:47:24.302587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-14 18:47:24.302703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-14 18:47:24.303147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-14 18:47:24.303161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous print_command/print_completion pairs follow for every other in-flight I/O (WRITEs lba 87648-88016, READs lba 87008-87624), each completed ABORTED - SQ DELETION; duplicate records elided ...]
[2024-12-14 18:47:24.307056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a4000 is same with the state(6) to be set
[2024-12-14 18:47:24.307074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting
queued i/o 00:27:23.707 [2024-12-14 18:47:24.307085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:23.707 [2024-12-14 18:47:24.307101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87632 len:8 PRP1 0x0 PRP2 0x0 00:27:23.707 [2024-12-14 18:47:24.307120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:23.707 [2024-12-14 18:47:24.307204] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16a4000 was disconnected and freed. reset controller. 00:27:23.707 [2024-12-14 18:47:24.307222] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:27:23.707 [2024-12-14 18:47:24.307291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:23.707 [2024-12-14 18:47:24.307312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:23.707 [2024-12-14 18:47:24.307327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:23.707 [2024-12-14 18:47:24.307340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:23.707 [2024-12-14 18:47:24.307354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:23.707 [2024-12-14 18:47:24.307367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:23.707 [2024-12-14 18:47:24.307381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:23.707 [2024-12-14 18:47:24.307393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:23.707 [2024-12-14 18:47:24.307406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.707 [2024-12-14 18:47:24.310878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.707 [2024-12-14 18:47:24.310940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1683ce0 (9): Bad file descriptor 00:27:23.707 [2024-12-14 18:47:24.347396] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:23.707 9409.50 IOPS, 36.76 MiB/s [2024-12-14T18:47:39.460Z] 9589.33 IOPS, 37.46 MiB/s [2024-12-14T18:47:39.460Z] 9663.00 IOPS, 37.75 MiB/s [2024-12-14T18:47:39.460Z]
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided, [2024-12-14 18:47:28.097348] onward: every queued READ (lba:129392-129464) and WRITE (lba:129472-129984) on qid:1 was completed as ABORTED - SQ DELETION (00/08), followed by repeated nvme_qpair_abort_queued_reqs "aborting queued i/o" / nvme_qpair_manual_complete_request "Command completed manually" sequences for queued WRITEs (cid:0, lba:129992-130008) ...]
[... the "aborting queued i/o" / "Command completed manually" sequence repeats for queued WRITEs (cid:0) through lba:130400, each printed as ABORTED - SQ DELETION (00/08); the final completion is at [2024-12-14 18:47:28.111973] ...]
00:27:23.711
[2024-12-14 18:47:28.111990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:23.711 [2024-12-14 18:47:28.112003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:23.711 [2024-12-14 18:47:28.112016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130408 len:8 PRP1 0x0 PRP2 0x0 00:27:23.711 [2024-12-14 18:47:28.112043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:23.711 [2024-12-14 18:47:28.112127] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16a5c10 was disconnected and freed. reset controller. 00:27:23.711 [2024-12-14 18:47:28.112176] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:27:23.711 [2024-12-14 18:47:28.112278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:23.711 [2024-12-14 18:47:28.112317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:23.711 [2024-12-14 18:47:28.112339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:23.711 [2024-12-14 18:47:28.112357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:23.711 [2024-12-14 18:47:28.112376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:23.711 [2024-12-14 18:47:28.112407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:23.711 [2024-12-14 18:47:28.112427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:23.711 [2024-12-14 18:47:28.112444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:23.711 [2024-12-14 18:47:28.112473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.711 [2024-12-14 18:47:28.112539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1683ce0 (9): Bad file descriptor 00:27:23.711 [2024-12-14 18:47:28.117731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.711 [2024-12-14 18:47:28.151609] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
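The failover above (the qpair torn down with SQ DELETION aborts, the trid switched from 10.0.0.3:4421 to 10.0.0.3:4422, then a controller reset) is driven over SPDK's JSON-RPC interface. A minimal sketch of how such a path switch can be reproduced, assuming a bdevperf instance listening on /var/tmp/bdevperf.sock and the NQN, address, and ports shown in this trace; using nvmf_subsystem_remove_listener as the trigger is an assumption, the test script may kill the active path differently:

  # register two paths under the same controller name; the second trid
  # becomes the failover target for bdev NVMe0
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # drop the active listener on the target side; queued and in-flight I/O is
  # completed with ABORTED - SQ DELETION (00/08) and bdev_nvme fails over to
  # the next registered trid, exactly as logged above
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421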
00:27:23.711 9545.40 IOPS, 37.29 MiB/s [2024-12-14T18:47:39.464Z] 9554.17 IOPS, 37.32 MiB/s [2024-12-14T18:47:39.464Z] 9573.00 IOPS, 37.39 MiB/s [2024-12-14T18:47:39.464Z] 9595.62 IOPS, 37.48 MiB/s [2024-12-14T18:47:39.464Z] 9592.56 IOPS, 37.47 MiB/s
[... the second path teardown follows the same pattern: at 18:47:32.719480 through 18:47:32.719693 the four queued ASYNC EVENT REQUEST admin commands (qid:0 cid:0-3) are completed with ABORTED - SQ DELETION (00/08) ...]
00:27:23.711 [2024-12-14 18:47:32.719707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1683ce0 is same with the state(6) to be set
[... every outstanding I/O on qid:1 is then printed and completed with ABORTED - SQ DELETION (00/08): READs lba:99016 through lba:99944 and WRITEs lba:99960 through lba:100032 (len:8, step 8, various cids), timestamps 18:47:32.720576 through 18:47:32.724962 ...]
00:27:23.715 [2024-12-14 18:47:32.724976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b24a0 is same with the state(6) to be set
[... one final queued READ (sqid:1 cid:0 nsid:1 lba:99952 len:8 PRP1 0x0 PRP2 0x0) is completed manually with ABORTED - SQ DELETION (00/08), 18:47:32.725000 through 18:47:32.725037 ...]
00:27:23.715 [2024-12-14 18:47:32.725116] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16b24a0 was disconnected and freed. reset controller.
00:27:23.715 [2024-12-14 18:47:32.725134] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420
00:27:23.715 [2024-12-14 18:47:32.725153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.715 [2024-12-14 18:47:32.728797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.715 [2024-12-14 18:47:32.728861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1683ce0 (9): Bad file descriptor
00:27:23.715 [2024-12-14 18:47:32.764576] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:23.715 9612.30 IOPS, 37.55 MiB/s [2024-12-14T18:47:39.468Z] 9664.18 IOPS, 37.75 MiB/s [2024-12-14T18:47:39.468Z] 9728.58 IOPS, 38.00 MiB/s [2024-12-14T18:47:39.468Z] 9768.77 IOPS, 38.16 MiB/s [2024-12-14T18:47:39.468Z] 9785.93 IOPS, 38.23 MiB/s
00:27:23.715 Latency(us)
[2024-12-14T18:47:39.468Z] Device Information : runtime(s)    IOPS       MiB/s    Fail/s    TO/s    Average     min       max
00:27:23.715 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:23.715 Verification LBA range: start 0x0 length 0x4000
00:27:23.715 NVMe0n1            : 15.01        9778.19    38.20    221.58    0.00    12772.27    774.52    26691.03
[2024-12-14T18:47:39.468Z] ===================================================================================================================
[2024-12-14T18:47:39.468Z] Total              :              9778.19    38.20    221.58    0.00    12772.27    774.52    26691.03
00:27:23.715 Received shutdown signal, test time was about 15.000000 seconds
00:27:23.715 Latency(us)
[2024-12-14T18:47:39.468Z] Device Information : runtime(s)    IOPS       MiB/s    Fail/s    TO/s    Average     min       max
[2024-12-14T18:47:39.468Z] ===================================================================================================================
[2024-12-14T18:47:39.468Z] Total              :              0.00       0.00     0.00      0.00    0.00        0.00      0.00
00:27:23.715 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
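A quick check of the NVMe0n1 row above: with 4096-byte I/Os, MiB/s = IOPS x 4096 / 2^20, and 9778.19 x 4096 / 1048576 = 38.20 MiB/s, matching the reported column. The Fail/s figure (221.58 over the 15.01 s run, roughly 3300 I/Os) presumably counts the commands aborted across the path switches rather than data miscompares, since the verify workload still completes successfully.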
00:27:23.715 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:27:23.715 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:27:23.715 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=109198 00:27:23.715 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 109198 /var/tmp/bdevperf.sock 00:27:23.715 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:23.715 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 109198 ']' 00:27:23.715 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:23.715 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:23.715 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:23.715 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:23.715 18:47:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:23.974 18:47:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:23.974 18:47:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:27:23.974 18:47:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:27:24.232 [2024-12-14 18:47:39.858448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:24.232 18:47:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:27:24.491 [2024-12-14 18:47:40.143001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:27:24.491 18:47:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:25.059 NVMe0n1 00:27:25.059 18:47:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:25.317 00:27:25.317 18:47:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:25.576 00:27:25.576 18:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:25.576 18:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:27:25.835 18:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp 
-a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:26.402 18:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:27:29.684 18:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
18:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
18:47:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=109342
18:47:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
18:47:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 109342
00:27:30.660 {
00:27:30.660 "results": [
00:27:30.660 {
00:27:30.660 "job": "NVMe0n1",
00:27:30.660 "core_mask": "0x1",
00:27:30.660 "workload": "verify",
00:27:30.660 "status": "finished",
00:27:30.660 "verify_range": {
00:27:30.660 "start": 0,
00:27:30.660 "length": 16384
00:27:30.660 },
00:27:30.660 "queue_depth": 128,
00:27:30.660 "io_size": 4096,
00:27:30.660 "runtime": 1.006396,
00:27:30.660 "iops": 10357.751819363351,
00:27:30.660 "mibps": 40.45996804438809,
00:27:30.660 "io_failed": 0,
00:27:30.660 "io_timeout": 0,
00:27:30.660 "avg_latency_us": 12297.774188236936,
00:27:30.660 "min_latency_us": 1720.32,
00:27:30.660 "max_latency_us": 14060.450909090909
00:27:30.660 }
00:27:30.660 ],
00:27:30.660 "core_count": 1
00:27:30.660 }
00:27:30.660 18:47:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
[2024-12-14 18:47:38.587828] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:27:30.660 [2024-12-14 18:47:38.587946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109198 ] 00:27:30.660 [2024-12-14 18:47:38.732046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.660 [2024-12-14 18:47:38.827039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.660 [2024-12-14 18:47:41.843587] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:27:30.660 [2024-12-14 18:47:41.843719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.660 [2024-12-14 18:47:41.843745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.660 [2024-12-14 18:47:41.843763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.660 [2024-12-14 18:47:41.843777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.660 [2024-12-14 18:47:41.843791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.660 [2024-12-14 18:47:41.843804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.660 [2024-12-14 18:47:41.843819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.660 [2024-12-14 18:47:41.843832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.660 [2024-12-14 18:47:41.843846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.660 [2024-12-14 18:47:41.843896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.660 [2024-12-14 18:47:41.843924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227ace0 (9): Bad file descriptor 00:27:30.660 [2024-12-14 18:47:41.853600] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:30.660 Running I/O for 1 seconds... 
00:27:30.660 10296.00 IOPS, 40.22 MiB/s
00:27:30.660 Latency(us)
00:27:30.660 [2024-12-14T18:47:46.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:30.660 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:30.660 Verification LBA range: start 0x0 length 0x4000
00:27:30.660 NVMe0n1 : 1.01 10357.75 40.46 0.00 0.00 12297.77 1720.32 14060.45
[2024-12-14T18:47:46.413Z] ===================================================================================================================
[2024-12-14T18:47:46.413Z] Total : 10357.75 40.46 0.00 0.00 12297.77 1720.32 14060.45
00:27:30.660 18:47:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
18:47:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
18:47:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:31.178 18:47:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
18:47:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:27:31.437 18:47:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:31.696 18:47:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:27:34.984 18:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
18:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:27:35.243 18:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 109198
18:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 109198 ']'
18:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 109198
18:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
18:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
18:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109198
killing process with pid 109198
18:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
18:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
18:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109198'
18:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 109198
18:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 109198
00:27:35.503 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- #
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:35.762 rmmod nvme_tcp 00:27:35.762 rmmod nvme_fabrics 00:27:35.762 rmmod nvme_keyring 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 108854 ']' 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 108854 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 108854 ']' 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 108854 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 108854 00:27:35.762 killing process with pid 108854 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 108854' 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 108854 00:27:35.762 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 108854 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ 
nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.329 18:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.329 18:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:27:36.329 00:27:36.329 real 0m34.098s 00:27:36.329 user 2m12.672s 00:27:36.329 sys 0m4.887s 00:27:36.329 18:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:36.329 18:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:36.329 ************************************ 00:27:36.329 END TEST nvmf_failover 00:27:36.329 ************************************ 00:27:36.329 18:47:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:36.329 18:47:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:36.329 18:47:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:36.329 18:47:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.589 ************************************ 00:27:36.589 START TEST nvmf_host_discovery 00:27:36.589 ************************************ 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:36.589 * Looking for test storage... 
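A note on structure: each suite in this log is dispatched through run_test, which wraps a standalone shell script; the trace above shows nvmf_host_discovery being started as /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp. As an illustrative sketch only (a bare invocation may still need environment setup that the autotest harness normally provides), the suite can in principle be replayed by hand:

    # Hedged sketch; paths and the --transport flag are taken from the trace.
    cd /home/vagrant/spdk_repo/spdk
    ./test/nvmf/host/discovery.sh --transport=tcp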
00:27:36.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:36.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.589 --rc genhtml_branch_coverage=1 00:27:36.589 --rc genhtml_function_coverage=1 00:27:36.589 --rc genhtml_legend=1 00:27:36.589 --rc geninfo_all_blocks=1 00:27:36.589 --rc geninfo_unexecuted_blocks=1 00:27:36.589 00:27:36.589 ' 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:36.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.589 --rc genhtml_branch_coverage=1 00:27:36.589 --rc genhtml_function_coverage=1 00:27:36.589 --rc genhtml_legend=1 00:27:36.589 --rc geninfo_all_blocks=1 00:27:36.589 --rc geninfo_unexecuted_blocks=1 00:27:36.589 00:27:36.589 ' 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:36.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.589 --rc genhtml_branch_coverage=1 00:27:36.589 --rc genhtml_function_coverage=1 00:27:36.589 --rc genhtml_legend=1 00:27:36.589 --rc geninfo_all_blocks=1 00:27:36.589 --rc geninfo_unexecuted_blocks=1 00:27:36.589 00:27:36.589 ' 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:36.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.589 --rc genhtml_branch_coverage=1 00:27:36.589 --rc genhtml_function_coverage=1 00:27:36.589 --rc genhtml_legend=1 00:27:36.589 --rc geninfo_all_blocks=1 00:27:36.589 --rc geninfo_unexecuted_blocks=1 00:27:36.589 00:27:36.589 ' 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.589 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:36.590 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@456 -- # nvmf_veth_init 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
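The nvmf_veth_init steps that follow assemble the topology those variables describe: initiator interfaces nvmf_init_if and nvmf_init_if2 (10.0.0.1, 10.0.0.2) stay in the root namespace, target interfaces nvmf_tgt_if and nvmf_tgt_if2 (10.0.0.3, 10.0.0.4) move into the nvmf_tgt_ns_spdk namespace, and everything is joined through the nvmf_br bridge. Condensed from the commands visible in the trace below (first veth pair of each kind shown; the *_if2/*_br2 pair is handled the same way):

    # Condensed sketch of the setup traced below; names and addresses are the
    # ones used in this log.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # ...then every link is brought up, as the trace shows.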
00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:36.590 Cannot find device "nvmf_init_br" 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:27:36.590 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:36.849 Cannot find device "nvmf_init_br2" 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:36.849 Cannot find device "nvmf_tgt_br" 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:36.849 Cannot find device "nvmf_tgt_br2" 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:36.849 Cannot find device "nvmf_init_br" 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:36.849 Cannot find device "nvmf_init_br2" 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:36.849 Cannot find device "nvmf_tgt_br" 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:36.849 Cannot find device "nvmf_tgt_br2" 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:36.849 Cannot find device "nvmf_br" 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:36.849 Cannot find device "nvmf_init_if" 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:36.849 Cannot find device "nvmf_init_if2" 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:36.849 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:36.849 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:36.849 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:37.108 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:37.108 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:27:37.108 00:27:37.108 --- 10.0.0.3 ping statistics --- 00:27:37.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.108 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:37.108 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:37.108 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:27:37.108 00:27:37.108 --- 10.0.0.4 ping statistics --- 00:27:37.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.108 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:37.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:37.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:27:37.108 00:27:37.108 --- 10.0.0.1 ping statistics --- 00:27:37.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.108 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:37.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:37.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:27:37.108 00:27:37.108 --- 10.0.0.2 ping statistics --- 00:27:37.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.108 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@457 -- # return 0 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=109698 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 109698 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 109698 ']' 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:37.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:37.108 18:47:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.108 [2024-12-14 18:47:52.777210] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:27:37.108 [2024-12-14 18:47:52.777288] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.367 [2024-12-14 18:47:52.908658] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.367 [2024-12-14 18:47:52.985017] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.367 [2024-12-14 18:47:52.985095] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.367 [2024-12-14 18:47:52.985105] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:37.367 [2024-12-14 18:47:52.985113] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:37.367 [2024-12-14 18:47:52.985119] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:37.367 [2024-12-14 18:47:52.985145] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.626 [2024-12-14 18:47:53.190477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.626 [2024-12-14 18:47:53.202683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.626 null0 00:27:37.626 18:47:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.626 null1 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=109735 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 109735 /tmp/host.sock 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 109735 ']' 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:37.626 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:37.626 18:47:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.626 [2024-12-14 18:47:53.300395] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:27:37.626 [2024-12-14 18:47:53.300496] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109735 ] 00:27:37.885 [2024-12-14 18:47:53.443902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.885 [2024-12-14 18:47:53.533915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:38.820 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:27:39.079 18:47:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.079 [2024-12-14 18:47:54.698952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.079 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:27:39.338 18:47:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:27:39.595 [2024-12-14 18:47:55.345681] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:39.595 [2024-12-14 18:47:55.345711] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:39.595 
[2024-12-14 18:47:55.345734] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:39.852 [2024-12-14 18:47:55.431779] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:27:39.852 [2024-12-14 18:47:55.488558] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:27:39.852 [2024-12-14 18:47:55.488582] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:27:40.419 18:47:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:40.419 18:47:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:40.419 18:47:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:40.419 18:47:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:40.419 18:47:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:40.419 18:47:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.419 18:47:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:40.419 18:47:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:40.419 18:47:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:40.419 18:47:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:40.419 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.678 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:40.678 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:40.678 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:40.678 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 
'cond=get_notification_count && ((notification_count == expected_count))' 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:40.679 [2024-12-14 18:47:56.303400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:40.679 [2024-12-14 18:47:56.303693] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:40.679 [2024-12-14 18:47:56.303715] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:40.679 18:47:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:40.679 [2024-12-14 18:47:56.389752] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:27:40.679 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.937 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:40.937 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:40.937 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:40.937 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:40.937 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:40.937 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:40.937 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' 
'"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:40.937 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:27:40.937 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:40.938 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.938 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:40.938 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:40.938 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:40.938 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:40.938 [2024-12-14 18:47:56.449093] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:27:40.938 [2024-12-14 18:47:56.449110] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:27:40.938 [2024-12-14 18:47:56.449116] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:27:40.938 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.938 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:40.938 18:47:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.872 [2024-12-14 18:47:57.615599] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:41.872 [2024-12-14 18:47:57.615651] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:41.872 [2024-12-14 18:47:57.616315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.872 [2024-12-14 18:47:57.616347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.872 [2024-12-14 18:47:57.616359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.872 [2024-12-14 18:47:57.616368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.872 [2024-12-14 18:47:57.616376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.872 [2024-12-14 18:47:57.616384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.872 [2024-12-14 
18:47:57.616393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.872 [2024-12-14 18:47:57.616401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.872 [2024-12-14 18:47:57.616409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb88fb0 is same with the state(6) to be set 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:41.872 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:42.132 [2024-12-14 18:47:57.626268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb88fb0 (9): Bad file descriptor 00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:42.132 [2024-12-14 18:47:57.636286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:42.132 [2024-12-14 18:47:57.636573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.132 [2024-12-14 18:47:57.636752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb88fb0 with addr=10.0.0.3, port=4420 00:27:42.132 [2024-12-14 18:47:57.636914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb88fb0 is same with the state(6) to be set 00:27:42.132 [2024-12-14 18:47:57.637069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb88fb0 (9): Bad file descriptor 00:27:42.132 [2024-12-14 18:47:57.637100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:42.132 [2024-12-14 18:47:57.637112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:42.132 [2024-12-14 18:47:57.637125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:42.132 [2024-12-14 18:47:57.637142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
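Every waitforcondition call in this trace expands to the same bounded retry loop visible in the common/autotest_common.sh@914-920 xtrace: evaluate the condition string up to ten times, one second apart. Reconstructed from those traced lines (the real helper's failure-path logging is omitted):

waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        # cond arrives as a string, e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1    # condition never held within ~10 seconds
}

Passing the condition as a string and re-eval'ing it each round is what lets the command substitutions inside (get_subsystem_names, get_bdev_list, ...) be re-run on every attempt.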
00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.132 [2024-12-14 18:47:57.646528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:42.132 [2024-12-14 18:47:57.646621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.132 [2024-12-14 18:47:57.646641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb88fb0 with addr=10.0.0.3, port=4420 00:27:42.132 [2024-12-14 18:47:57.646652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb88fb0 is same with the state(6) to be set 00:27:42.132 [2024-12-14 18:47:57.646667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb88fb0 (9): Bad file descriptor 00:27:42.132 [2024-12-14 18:47:57.646681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:42.132 [2024-12-14 18:47:57.646689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:42.132 [2024-12-14 18:47:57.646698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:42.132 [2024-12-14 18:47:57.646711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:42.132 [2024-12-14 18:47:57.656578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:42.132 [2024-12-14 18:47:57.656682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.132 [2024-12-14 18:47:57.656701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb88fb0 with addr=10.0.0.3, port=4420 00:27:42.132 [2024-12-14 18:47:57.656712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb88fb0 is same with the state(6) to be set 00:27:42.132 [2024-12-14 18:47:57.656741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb88fb0 (9): Bad file descriptor 00:27:42.132 [2024-12-14 18:47:57.656755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:42.132 [2024-12-14 18:47:57.656763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:42.132 [2024-12-14 18:47:57.656771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:42.132 [2024-12-14 18:47:57.656792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
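The probes driving those conditions are one-line RPC-plus-jq pipelines, visible verbatim in the host/discovery.sh@55, @59 and @63 xtrace above; sort plus xargs flattens each result into a single space-separated string so it can be compared with == in the wait loop. Reassembled as they appear in the trace:

get_subsystem_names() {    # controller names known to the host app
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {          # bdevs the discovery service has attached
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
get_subsystem_paths() {    # one trsvcid (port) per connected path of controller $1
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}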
00:27:42.132 [2024-12-14 18:47:57.666651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:42.132 [2024-12-14 18:47:57.666731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.132 [2024-12-14 18:47:57.666750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb88fb0 with addr=10.0.0.3, port=4420 00:27:42.132 [2024-12-14 18:47:57.666760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb88fb0 is same with the state(6) to be set 00:27:42.132 [2024-12-14 18:47:57.666775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb88fb0 (9): Bad file descriptor 00:27:42.132 [2024-12-14 18:47:57.666787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:42.132 [2024-12-14 18:47:57.666796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:42.132 [2024-12-14 18:47:57.666805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:42.132 [2024-12-14 18:47:57.666818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:42.132 [2024-12-14 18:47:57.676701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:42.132 [2024-12-14 18:47:57.676773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.132 [2024-12-14 18:47:57.676792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb88fb0 with addr=10.0.0.3, port=4420 00:27:42.132 [2024-12-14 18:47:57.676801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb88fb0 is same with the state(6) to be set 00:27:42.132 [2024-12-14 18:47:57.676815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb88fb0 (9): Bad file descriptor 00:27:42.132 [2024-12-14 18:47:57.676828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:42.132 [2024-12-14 18:47:57.676837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:42.132 [2024-12-14 18:47:57.676845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:42.132 [2024-12-14 18:47:57.676858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
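is_notification_count_eq, exercised before and after every namespace and listener change, only counts events newer than the last high-water mark: host/discovery.sh@74 fetches notifications past $notify_id and @75 advances the mark (0 -> 1 -> 2 -> 4 across this run). A sketch consistent with those traced values; the exact body of the real helper may differ:

get_notification_count() {
    # count notifications issued since id $notify_id, then advance the mark
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}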
00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:42.132 [2024-12-14 18:47:57.686745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:42.132 [2024-12-14 18:47:57.686978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' [2024-12-14 18:47:57.687000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb88fb0 with addr=10.0.0.3, port=4420 00:27:42.132 [2024-12-14 18:47:57.687012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb88fb0 is same with the state(6) to be set 00:27:42.132 [2024-12-14 18:47:57.687028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb88fb0 (9): Bad file descriptor 00:27:42.132 [2024-12-14 18:47:57.687042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:42.132 [2024-12-14 18:47:57.687051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:42.132 [2024-12-14 18:47:57.687060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort [2024-12-14 18:47:57.687074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
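The burst of "connect() failed, errno = 111" records above is the expected fallout of host/discovery.sh@127: the 4420 listener has just been removed, so bdev_nvme's reconnect logic keeps hitting a closed port until the next discovery log page prunes the path and only 4421 remains. Condensed to its two operative commands, both visible in this trace, the step is:

# target side: drop the first listener
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# host side: wait until discovery has pruned the path list down to the second port (4421 here)
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'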
00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.132 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.132 [2024-12-14 18:47:57.696940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:42.132 [2024-12-14 18:47:57.697033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.133 [2024-12-14 18:47:57.697051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb88fb0 with addr=10.0.0.3, port=4420 00:27:42.133 [2024-12-14 18:47:57.697060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb88fb0 is same with the state(6) to be set 00:27:42.133 [2024-12-14 18:47:57.697074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb88fb0 (9): Bad file descriptor 00:27:42.133 [2024-12-14 18:47:57.697086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:42.133 [2024-12-14 18:47:57.697094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:42.133 [2024-12-14 18:47:57.697102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:42.133 [2024-12-14 18:47:57.697114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:42.133 [2024-12-14 18:47:57.701191] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:27:42.133 [2024-12-14 18:47:57.701215] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.133 18:47:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.133 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:42.392 18:47:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:42.392 18:47:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.392 18:47:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:43.767 [2024-12-14 18:47:59.089712] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:43.767 [2024-12-14 18:47:59.089733] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:43.767 [2024-12-14 18:47:59.089765] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:43.767 [2024-12-14 18:47:59.175788] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:27:43.767 [2024-12-14 18:47:59.236210] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:27:43.767 [2024-12-14 18:47:59.236261] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
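get_notification_count, invoked through the condition above, fetches host notifications newer than the last seen id and advances notify_id by however many arrived (here 2, taking notify_id from 2 to 4). A rough sketch consistent with the traced values:

get_notification_count() {
    # Count notifications past the current notify_id via the traced jq filter.
    notification_count=$(rpc_cmd -s /tmp/host.sock \
        notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$(( notify_id + notification_count ))
}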
set +x 00:27:43.767 2024/12/14 18:47:59 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:27:43.767 request: 00:27:43.767 { 00:27:43.767 "method": "bdev_nvme_start_discovery", 00:27:43.767 "params": { 00:27:43.767 "name": "nvme", 00:27:43.767 "trtype": "tcp", 00:27:43.767 "traddr": "10.0.0.3", 00:27:43.767 "adrfam": "ipv4", 00:27:43.767 "trsvcid": "8009", 00:27:43.767 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:43.767 "wait_for_attach": true 00:27:43.767 } 00:27:43.767 } 00:27:43.767 Got JSON-RPC error response 00:27:43.767 GoRPCClient: error on JSON-RPC call 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 
-s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:43.767 2024/12/14 18:47:59 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:27:43.767 request: 00:27:43.767 { 00:27:43.767 "method": "bdev_nvme_start_discovery", 00:27:43.767 "params": { 00:27:43.767 "name": "nvme_second", 00:27:43.767 "trtype": "tcp", 00:27:43.767 "traddr": "10.0.0.3", 00:27:43.767 "adrfam": "ipv4", 00:27:43.767 "trsvcid": "8009", 00:27:43.767 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:43.767 "wait_for_attach": true 00:27:43.767 } 00:27:43.767 } 00:27:43.767 Got JSON-RPC error response 00:27:43.767 GoRPCClient: error on JSON-RPC call 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
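Both duplicate bdev_nvme_start_discovery attempts are wrapped in NOT, which succeeds only when the wrapped command fails, exactly what a second discovery service under an existing name must do (Code=-17, File exists). Sketched from the traced es bookkeeping; the real helper also special-cases exit codes above 128 (death by signal), per the (( es > 128 )) test above:

NOT() {
    local es=0
    "$@" || es=$?    # run the wrapped command and capture its exit status
    (( es != 0 ))    # NOT passes only if the command failed
}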
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.767 18:47:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:45.141 [2024-12-14 18:48:00.497084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.141 [2024-12-14 18:48:00.497308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbbc3b0 with addr=10.0.0.3, port=8010 00:27:45.141 [2024-12-14 18:48:00.497347] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:45.141 [2024-12-14 18:48:00.497358] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:45.141 [2024-12-14 18:48:00.497368] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:27:46.076 [2024-12-14 18:48:01.496992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.077 [2024-12-14 18:48:01.497030] 
nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbbc3b0 with addr=10.0.0.3, port=8010 00:27:46.077 [2024-12-14 18:48:01.497049] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:46.077 [2024-12-14 18:48:01.497057] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:46.077 [2024-12-14 18:48:01.497065] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:27:47.011 [2024-12-14 18:48:02.496924] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:27:47.011 2024/12/14 18:48:02 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:27:47.011 request: 00:27:47.011 { 00:27:47.011 "method": "bdev_nvme_start_discovery", 00:27:47.011 "params": { 00:27:47.011 "name": "nvme_second", 00:27:47.011 "trtype": "tcp", 00:27:47.011 "traddr": "10.0.0.3", 00:27:47.011 "adrfam": "ipv4", 00:27:47.011 "trsvcid": "8010", 00:27:47.011 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:47.011 "wait_for_attach": false, 00:27:47.011 "attach_timeout_ms": 3000 00:27:47.011 } 00:27:47.011 } 00:27:47.011 Got JSON-RPC error response 00:27:47.011 GoRPCClient: error on JSON-RPC call 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 109735 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:47.011 18:48:02 
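Nothing listens on 10.0.0.3:8010, so each connect attempt above fails with errno 111 and, once the 3000 ms attach budget lapses, the RPC returns Code=-110 (Connection timed out), the expected outcome for this negative test. The failing call corresponds to the following invocation (rpc_cmd in the trace wraps scripts/rpc.py; every flag is taken verbatim from the trace):

# -T sets attach_timeout_ms; -w (wait_for_attach) is deliberately omitted here.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -T 3000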
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:47.011 rmmod nvme_tcp 00:27:47.011 rmmod nvme_fabrics 00:27:47.011 rmmod nvme_keyring 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 109698 ']' 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 109698 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 109698 ']' 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 109698 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109698 00:27:47.011 killing process with pid 109698 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109698' 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 109698 00:27:47.011 18:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 109698 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 
nomaster 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:27:47.578 00:27:47.578 real 0m11.189s 00:27:47.578 user 0m21.834s 00:27:47.578 sys 0m1.892s 00:27:47.578 ************************************ 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:47.578 END TEST nvmf_host_discovery 00:27:47.578 ************************************ 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:47.578 18:48:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.838 ************************************ 00:27:47.838 START TEST nvmf_host_multipath_status 00:27:47.838 ************************************ 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:47.838 * Looking for test storage... 
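run_test, which frames each suite with the starred banners above, times the suite body (the real/user/sys lines) and propagates its exit status. A minimal sketch of the pattern as it appears in this output; banner widths and the argument-count check ('[' 3 -le 1 ']') are simplified:

run_test() {
    local suite=$1; shift
    echo "START TEST $suite"
    time "$@"            # e.g. .../multipath_status.sh --transport=tcp
    local rc=$?
    echo "END TEST $suite"
    return $rc
}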
00:27:47.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:47.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.838 --rc genhtml_branch_coverage=1 00:27:47.838 --rc genhtml_function_coverage=1 00:27:47.838 --rc genhtml_legend=1 00:27:47.838 --rc geninfo_all_blocks=1 00:27:47.838 --rc geninfo_unexecuted_blocks=1 00:27:47.838 00:27:47.838 ' 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:47.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.838 --rc genhtml_branch_coverage=1 00:27:47.838 --rc genhtml_function_coverage=1 00:27:47.838 --rc genhtml_legend=1 00:27:47.838 --rc geninfo_all_blocks=1 00:27:47.838 --rc geninfo_unexecuted_blocks=1 00:27:47.838 00:27:47.838 ' 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:47.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.838 --rc genhtml_branch_coverage=1 00:27:47.838 --rc genhtml_function_coverage=1 00:27:47.838 --rc genhtml_legend=1 00:27:47.838 --rc geninfo_all_blocks=1 00:27:47.838 --rc geninfo_unexecuted_blocks=1 00:27:47.838 00:27:47.838 ' 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:47.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.838 --rc genhtml_branch_coverage=1 00:27:47.838 --rc genhtml_function_coverage=1 00:27:47.838 --rc genhtml_legend=1 00:27:47.838 --rc geninfo_all_blocks=1 00:27:47.838 --rc geninfo_unexecuted_blocks=1 00:27:47.838 00:27:47.838 ' 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:47.838 18:48:03 
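The lt 1.15 2 gate above resolves through cmp_versions in scripts/common.sh: both version strings are split on '.', '-' and ':' (the IFS=.-: reads above) and corresponding fields are compared as integers until one side wins. A condensed sketch of that walk; treating missing fields as 0 is an assumption not exercised in this trace:

cmp_versions() {
    local IFS='.-:'
    local -a ver1=($1) ver2=($3)
    local op=$2 v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
            [[ $op == '>' || $op == '>=' ]]; return
        elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
            [[ $op == '<' || $op == '<=' ]]; return
        fi
    done
    [[ $op == *'='* ]]                    # versions compare equal
}
lt() { cmp_versions "$1" '<' "$2"; }      # lt 1.15 2 -> true, so the lcov branch is taken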
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.838 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:47.839 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # nvmf_veth_init 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:47.839 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:47.839 Cannot find device "nvmf_init_br" 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:48.098 Cannot find device "nvmf_init_br2" 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:48.098 Cannot find device "nvmf_tgt_br" 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:48.098 Cannot find device "nvmf_tgt_br2" 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:48.098 Cannot find device "nvmf_init_br" 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:48.098 Cannot find device "nvmf_init_br2" 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:48.098 Cannot find device "nvmf_tgt_br" 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:48.098 Cannot find device "nvmf_tgt_br2" 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:48.098 Cannot find device "nvmf_br" 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:27:48.098 Cannot find device "nvmf_init_if" 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:48.098 Cannot find device "nvmf_init_if2" 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:48.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:48.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:48.098 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:48.099 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:48.099 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:48.099 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:48.099 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:48.099 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:48.358 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:48.358 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.293 ms 00:27:48.358 00:27:48.358 --- 10.0.0.3 ping statistics --- 00:27:48.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.358 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:48.358 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:48.358 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:27:48.358 00:27:48.358 --- 10.0.0.4 ping statistics --- 00:27:48.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.358 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:48.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:48.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:27:48.358 00:27:48.358 --- 10.0.0.1 ping statistics --- 00:27:48.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.358 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:48.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:48.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:27:48.358 00:27:48.358 --- 10.0.0.2 ping statistics --- 00:27:48.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.358 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # return 0 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=110282 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 110282 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 110282 ']' 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
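By the time all four pings succeed, nvmf_veth_init has built and verified the test topology: initiator addresses 10.0.0.1/2 on the host side, target addresses 10.0.0.3/4 inside the nvmf_tgt_ns_spdk namespace, every veth bridge end enslaved to nvmf_br, and iptables ACCEPT rules for port 4420. Condensed to a single interface per side, the traced setup reduces to:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk        # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3    # host -> namespaced target, as verified above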
00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:48.358 18:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:48.358 [2024-12-14 18:48:04.058929] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:27:48.358 [2024-12-14 18:48:04.059026] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:48.617 [2024-12-14 18:48:04.200733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:27:48.617 [2024-12-14 18:48:04.271269] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:48.617 [2024-12-14 18:48:04.271665] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:48.617 [2024-12-14 18:48:04.271812] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:48.617 [2024-12-14 18:48:04.271860] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:48.617 [2024-12-14 18:48:04.271957] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:48.617 [2024-12-14 18:48:04.272134] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:27:48.617 [2024-12-14 18:48:04.272143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:27:48.876 18:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:48.876 18:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:27:48.876 18:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:27:48.876 18:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable
00:27:48.876 18:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:48.876 18:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:48.876 18:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=110282
00:27:48.876 18:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:27:49.134 [2024-12-14 18:48:04.733564] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:49.134 18:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:27:49.394 Malloc0
00:27:49.394 18:48:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:27:49.662 18:48:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:49.934 18:48:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:27:50.501 [2024-12-14 18:48:06.001588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:27:50.501 18:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:27:50.760 [2024-12-14 18:48:06.281663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:27:50.760 18:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:27:50.760 18:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=110368
00:27:50.760 18:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:27:50.760 18:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 110368 /var/tmp/bdevperf.sock
00:27:50.760 18:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 110368 ']'
00:27:50.760 18:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:50.760 18:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:50.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
18:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
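Collecting the RPC calls traced above into one place: the target gets a TCP transport, a 64 MiB RAM-backed bdev, and one subsystem with ANA reporting enabled (-r) that listens on two ports of the same address; those two listeners are the two paths exercised below. Flags, names, and the NQN are exactly as traced, and rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0                # 64 MiB bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

bdevperf is then started with -z, so it sits idle on its own RPC socket (-r /var/tmp/bdevperf.sock) until the controllers are attached and bdevperf.py triggers perform_tests, both of which follow in the trace.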
00:27:50.760 18:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:50.760 18:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:51.696 18:48:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:51.696 18:48:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:27:51.696 18:48:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:27:51.955 18:48:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
00:27:52.523 Nvme0n1
00:27:52.523 18:48:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:27:52.781 Nvme0n1
00:27:52.781 18:48:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:27:52.781 18:48:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:27:55.314 18:48:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:27:55.314 18:48:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
00:27:55.314 18:48:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:27:55.314 18:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:27:56.689 18:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:27:56.689 18:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:56.689 18:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:56.689 18:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:56.689 18:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:56.689 18:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:27:56.689 18:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:56.689 18:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:56.948 18:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:56.948 18:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:56.948 18:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:56.948 18:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:57.515 18:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:57.515 18:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:57.515 18:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:57.515 18:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:57.774 18:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:57.774 18:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:57.774 18:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:57.774 18:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:58.032 18:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:58.032 18:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:58.032 18:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:58.032 18:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:58.291 18:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:58.291 18:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:27:58.291 18:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:27:58.549 18:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
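Every check_status round from here on is the same probe repeated for the current/connected/accessible fields of both ports: dump bdevperf's I/O paths and compare one field of the path whose trsvcid matches. An approximate reconstruction of the helpers from host/multipath_status.sh as they read back from the xtrace output; the function bodies are inferred from the traced commands, so treat this as a sketch rather than the script's exact text (rpc.py again stands for the full scripts/rpc.py path):

# usage: port_status <trsvcid> <field> <expected>, e.g. port_status 4420 current true
port_status() {
    local port=$1 field=$2 expected=$3 actual
    actual=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

# usage: set_ANA_state <state-for-4420> <state-for-4421>
set_ANA_state() {
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}

Under the default active_passive policy only one path reports current=true at a time, which is why the expected pattern flips between (true,false) and (false,true) as the optimized listener moves; after bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active later in the trace, both paths report current=true.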
00:27:58.808 18:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:27:59.806 18:48:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:27:59.806 18:48:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:27:59.806 18:48:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:59.806 18:48:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:28:00.374 18:48:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:28:00.374 18:48:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:28:00.374 18:48:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:00.374 18:48:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:28:00.632 18:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:00.632 18:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:28:00.632 18:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:00.632 18:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:28:00.893 18:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:00.893 18:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:28:00.893 18:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:00.893 18:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:28:01.152 18:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:01.152 18:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:28:01.152 18:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:01.152 18:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:28:01.411 18:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:01.411 18:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status --
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:01.411 18:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.411 18:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:01.670 18:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.670 18:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:28:01.670 18:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:01.929 18:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:28:02.497 18:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:28:03.433 18:48:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:28:03.433 18:48:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:03.433 18:48:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.433 18:48:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:03.691 18:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.691 18:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:03.691 18:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.691 18:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:03.950 18:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:03.950 18:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:03.950 18:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:03.950 18:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.208 18:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.209 18:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:28:04.209 18:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:04.209 18:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.468 18:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.468 18:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:04.468 18:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.468 18:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:05.035 18:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:05.035 18:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:05.035 18:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:05.035 18:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:05.035 18:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:05.035 18:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:28:05.035 18:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:05.293 18:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:28:05.860 18:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:28:06.795 18:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:28:06.795 18:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:06.795 18:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:06.795 18:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:07.053 18:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.053 18:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:07.053 18:48:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.054 18:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:07.313 18:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:07.313 18:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:07.313 18:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.313 18:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:07.572 18:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.572 18:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:07.572 18:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.572 18:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:07.831 18:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.831 18:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:07.831 18:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.831 18:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:08.090 18:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:08.090 18:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:08.349 18:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:08.349 18:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:08.608 18:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:08.608 18:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:08.608 18:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:28:08.866 18:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:28:09.125 18:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:10.060 18:48:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:28:10.060 18:48:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:10.060 18:48:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.060 18:48:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:10.318 18:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:10.318 18:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:10.318 18:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.318 18:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:10.577 18:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:10.577 18:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:10.577 18:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.577 18:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:11.144 18:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:11.144 18:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:11.144 18:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:11.144 18:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:11.403 18:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:11.403 18:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:11.403 18:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:11.403 18:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:28:11.666 18:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:11.666 18:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:11.666 18:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:11.666 18:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:11.943 18:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:11.943 18:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:28:11.943 18:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:28:12.211 18:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:28:12.473 18:48:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:28:13.850 18:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:28:13.850 18:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:13.850 18:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:13.851 18:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:13.851 18:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:13.851 18:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:13.851 18:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:13.851 18:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:14.109 18:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.110 18:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:14.110 18:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:14.110 18:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.368 18:48:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.368 18:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:14.368 18:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.368 18:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:14.936 18:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.936 18:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:14.936 18:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.936 18:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:15.195 18:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:15.195 18:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:15.195 18:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:15.195 18:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:15.454 18:48:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:15.454 18:48:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:28:15.713 18:48:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:28:15.713 18:48:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:28:15.972 18:48:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:28:16.230 18:48:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:17.606 18:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:17.606 18:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:17.606 18:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:17.606 18:48:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:17.607 18:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:17.607 18:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:17.607 18:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:17.607 18:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:17.865 18:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:17.865 18:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:17.865 18:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:17.865 18:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:18.124 18:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:18.124 18:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:18.124 18:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.124 18:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:18.383 18:48:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:18.383 18:48:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:18.383 18:48:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.383 18:48:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:18.641 18:48:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:18.641 18:48:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:18.642 18:48:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.642 18:48:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:18.900 18:48:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:18.900 18:48:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:18.900 18:48:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:19.468 18:48:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:28:19.468 18:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:28:20.843 18:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:28:20.844 18:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:20.844 18:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:20.844 18:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:20.844 18:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:20.844 18:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:20.844 18:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:20.844 18:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.103 18:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.103 18:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:21.103 18:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.103 18:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:21.361 18:48:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.362 18:48:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:21.362 18:48:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:21.362 18:48:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.620 18:48:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.620 18:48:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:21.620 18:48:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.620 18:48:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:21.879 18:48:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.879 18:48:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:21.879 18:48:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.879 18:48:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:22.138 18:48:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.138 18:48:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:28:22.138 18:48:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:22.396 18:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:28:22.655 18:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:28:24.031 18:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:24.031 18:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:24.031 18:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.031 18:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:24.031 18:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:24.031 18:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:24.031 18:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.031 18:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:24.290 18:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:24.290 18:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:28:24.290 18:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:24.290 18:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.549 18:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:24.549 18:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:24.549 18:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:24.549 18:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.808 18:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:24.808 18:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:24.808 18:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:24.808 18:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:25.067 18:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:25.067 18:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:25.067 18:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:25.067 18:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:25.634 18:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:25.634 18:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:25.634 18:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:25.893 18:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:28:26.152 18:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:27.088 18:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:27.088 18:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:27.088 18:48:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:27.088 18:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:27.353 18:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:27.353 18:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:27.353 18:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:27.353 18:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:27.612 18:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:27.612 18:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:27.612 18:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:27.612 18:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:28.180 18:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:28.180 18:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:28.180 18:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:28.180 18:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:28.438 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:28.438 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:28.438 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:28.438 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:28.697 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:28.697 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:28.697 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:28.697 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible'
00:28:28.956 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:28:28.956 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 110368
00:28:28.956 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 110368 ']'
00:28:28.956 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 110368
00:28:28.956 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:28:28.956 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:28.956 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110368
00:28:28.956 killing process with pid 110368
18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:28:28.956 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:28:28.956 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110368'
00:28:28.956 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 110368
00:28:28.956 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 110368
00:28:28.956 {
00:28:28.956 "results": [
00:28:28.956 {
00:28:28.956 "job": "Nvme0n1",
00:28:28.956 "core_mask": "0x4",
00:28:28.956 "workload": "verify",
00:28:28.956 "status": "terminated",
00:28:28.956 "verify_range": {
00:28:28.956 "start": 0,
00:28:28.956 "length": 16384
00:28:28.956 },
00:28:28.956 "queue_depth": 128,
00:28:28.956 "io_size": 4096,
00:28:28.956 "runtime": 36.044163,
00:28:28.956 "iops": 9424.299851268568,
00:28:28.956 "mibps": 36.813671294017844,
00:28:28.956 "io_failed": 0,
00:28:28.956 "io_timeout": 0,
00:28:28.956 "avg_latency_us": 13556.000851725941,
00:28:28.956 "min_latency_us": 148.01454545454544,
00:28:28.956 "max_latency_us": 4026531.84
00:28:28.956 }
00:28:28.956 ],
00:28:28.956 "core_count": 1
00:28:28.956 }
00:28:29.226 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 110368
00:28:29.226 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:28:29.226 [2024-12-14 18:48:06.356966] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:28:29.226 [2024-12-14 18:48:06.357071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110368 ]
00:28:29.226 [2024-12-14 18:48:06.492005] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:29.226 [2024-12-14 18:48:06.584977] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:28:29.226 [2024-12-14 18:48:08.395405] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01
00:28:29.226 Running I/O for 90 seconds...
00:28:29.226 10557.00 IOPS, 41.24 MiB/s [2024-12-14T18:48:44.979Z] 10740.50 IOPS, 41.96 MiB/s [2024-12-14T18:48:44.979Z] 10878.33 IOPS, 42.49 MiB/s [2024-12-14T18:48:44.979Z] 10930.00 IOPS, 42.70 MiB/s [2024-12-14T18:48:44.979Z] 10874.80 IOPS, 42.48 MiB/s [2024-12-14T18:48:44.979Z] 10881.50 IOPS, 42.51 MiB/s [2024-12-14T18:48:44.979Z] 10889.57 IOPS, 42.54 MiB/s [2024-12-14T18:48:44.979Z] 10883.62 IOPS, 42.51 MiB/s [2024-12-14T18:48:44.979Z] 10875.56 IOPS, 42.48 MiB/s [2024-12-14T18:48:44.979Z] 10894.10 IOPS, 42.56 MiB/s [2024-12-14T18:48:44.979Z] 10907.73 IOPS, 42.61 MiB/s [2024-12-14T18:48:44.979Z] 10901.25 IOPS, 42.58 MiB/s [2024-12-14T18:48:44.979Z] 10909.92 IOPS, 42.62 MiB/s [2024-12-14T18:48:44.979Z] 10919.71 IOPS, 42.66 MiB/s [2024-12-14T18:48:44.979Z] 10925.80 IOPS, 42.68 MiB/s [2024-12-14T18:48:44.979Z] [2024-12-14 18:48:24.381857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.226 [2024-12-14 18:48:24.381954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:29.226 [2024-12-14 18:48:24.382048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.226 [2024-12-14 18:48:24.382071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:29.226 [2024-12-14 18:48:24.382097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.226 [2024-12-14 18:48:24.382116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:29.226 [2024-12-14 18:48:24.382139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.226 [2024-12-14 18:48:24.382156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:29.226 [2024-12-14 18:48:24.382179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.226 [2024-12-14 18:48:24.382196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:29.226 [2024-12-14 18:48:24.382220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.226 [2024-12-14 18:48:24.382237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:29.226 [2024-12-14 18:48:24.382260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.226 [2024-12-14 18:48:24.382277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:29.226 [2024-12-14 18:48:24.382300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.226 [2024-12-14 18:48:24.382318] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:29.226 [2024-12-14 18:48:24.382919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.226 [2024-12-14 18:48:24.383007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:29.226 [2024-12-14 18:48:24.383040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.383061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.383101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.383141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.383181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.383221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.383263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.383304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.383390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:29.227 [2024-12-14 18:48:24.383434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.383474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.383515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.383556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.383651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.383700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.383743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.227 [2024-12-14 18:48:24.383784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.227 [2024-12-14 18:48:24.383827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.227 [2024-12-14 18:48:24.383868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:86 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.227 [2024-12-14 18:48:24.383910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.227 [2024-12-14 18:48:24.383952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.383976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.227 [2024-12-14 18:48:24.384011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.384034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.227 [2024-12-14 18:48:24.384053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.384076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.227 [2024-12-14 18:48:24.384094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.384116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.227 [2024-12-14 18:48:24.384134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.384169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.384189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.384212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.384229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.384263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.384280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.384305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.384323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.384346] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.384366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.384389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.384406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.384429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.384450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.384473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.384491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.384513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.384531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.384554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.384572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.384594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.384611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.384648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.384670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.384710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.384729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.384753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.384772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 
dnr:0 00:28:29.227 [2024-12-14 18:48:24.384795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-12-14 18:48:24.384812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.227 [2024-12-14 18:48:24.384835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.384853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.384877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.384894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.384919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.384936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.384959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.384985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.385008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.385026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.385049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.385066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.385096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.385114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.385137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.385154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.385177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.385194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.385217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.385248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.385273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.385292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.385315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.385332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.385355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.385372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.385395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.385413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.385437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.385454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.385476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.385494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.385517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.385534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.385557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.385575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.385746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.385773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.385805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.385825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.385851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.385868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.385894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.385937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.385967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.385987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.386013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.386030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.386072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.386091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.386117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.386135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.386160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.386178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.386203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.386221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.386246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:29.228 [2024-12-14 18:48:24.386264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.386290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.386307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.386333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.386350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.386376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.386392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.386418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.386436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.386462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.386480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.386517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.386536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.386571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.386588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.386636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.386656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.386682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.386700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.386726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.386752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.386778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-12-14 18:48:24.386812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:29.228 [2024-12-14 18:48:24.386843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.229 [2024-12-14 18:48:24.386862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.386889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.229 [2024-12-14 18:48:24.386907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.386944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.229 [2024-12-14 18:48:24.386965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.386992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.229 [2024-12-14 18:48:24.387011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.387039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.229 [2024-12-14 18:48:24.387056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.387082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.229 [2024-12-14 18:48:24.387099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.387125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.229 [2024-12-14 18:48:24.387171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.387205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.229 [2024-12-14 18:48:24.387224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.387250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.229 [2024-12-14 18:48:24.387267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.387293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.229 [2024-12-14 18:48:24.387311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.387336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.229 [2024-12-14 18:48:24.387353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.387378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.229 [2024-12-14 18:48:24.387395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.387422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.229 [2024-12-14 18:48:24.387439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.387465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.229 [2024-12-14 18:48:24.387482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.387509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.229 [2024-12-14 18:48:24.387526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.387552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.229 [2024-12-14 18:48:24.387569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.387598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.387631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.387662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.387692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
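[Editor's note] Every entry in this burst is a command print (nvme_qpair.c:243) immediately followed by its completion print (nvme_qpair.c:474). Decoding one completion field by field, with values taken verbatim from the READ/completion pair just above (the field meanings follow the NVMe base specification):

  # ASYMMETRIC ACCESS INACCESSIBLE (03/02) -> status code type 0x3 (Path Related), status code 0x02 (ANA Inaccessible)
  # qid:1        -> completion arrived on I/O queue pair 1
  # cid:40       -> command identifier, tying it back to the READ submitted with cid:40
  # cdw0:0       -> command-specific completion dword
  # sqhd:0040    -> submission queue head pointer after this completion
  # p:0 m:0 dnr:0 -> phase tag, More bit, and Do Not Retry bit; dnr:0 leaves the host free to retry the I/O on the surviving path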
00:28:29.229 [2024-12-14 18:48:24.387717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.387757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.387786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.387805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.387831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.387856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.387882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.387900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.387930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.387948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.387978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.387996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.388021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.388051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.388078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.388095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.388121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.388139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.388174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.388199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.388225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.388242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.388268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.388286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.388311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.388338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.388367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.388385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.388415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.388440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.388466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.388483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.388517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.388535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.388561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.388579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.388619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.388640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.388677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.388695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:29.229 [2024-12-14 18:48:24.388721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.229 [2024-12-14 18:48:24.388739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:29.229 10796.31 IOPS, 42.17 MiB/s [2024-12-14T18:48:44.982Z] 10161.24 IOPS, 39.69 MiB/s [2024-12-14T18:48:44.982Z] 9596.72 IOPS, 37.49 MiB/s [2024-12-14T18:48:44.982Z] 9091.63 IOPS, 35.51 MiB/s [2024-12-14T18:48:44.982Z] 8742.80 IOPS, 34.15 MiB/s [2024-12-14T18:48:44.982Z] 8842.57 IOPS, 34.54 MiB/s [2024-12-14T18:48:44.982Z] 8922.36 IOPS, 34.85 MiB/s [2024-12-14T18:48:44.982Z] 9009.96 IOPS, 35.20 MiB/s [2024-12-14T18:48:44.982Z] 9173.29 IOPS, 35.83 MiB/s [2024-12-14T18:48:44.983Z] 9218.56 IOPS, 36.01 MiB/s [2024-12-14T18:48:44.983Z] 9264.73 IOPS, 36.19 MiB/s [2024-12-14T18:48:44.983Z] 9311.07 IOPS, 36.37 MiB/s [2024-12-14T18:48:44.983Z] 9354.64 IOPS, 36.54 MiB/s [2024-12-14T18:48:44.983Z] 9392.03 IOPS, 36.69 MiB/s [2024-12-14T18:48:44.983Z] 9400.70 IOPS, 36.72 MiB/s [2024-12-14T18:48:44.983Z] 9441.42 IOPS, 36.88 MiB/s [2024-12-14T18:48:44.983Z] 9452.03 IOPS, 36.92 MiB/s [2024-12-14T18:48:44.983Z] 9464.39 IOPS, 36.97 MiB/s [2024-12-14T18:48:44.983Z] [2024-12-14 18:48:41.702280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.230 [2024-12-14 18:48:41.702338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.230 [2024-12-14 18:48:41.702383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.230 [2024-12-14 18:48:41.702403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.230 [2024-12-14 18:48:41.702471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.230 [2024-12-14 18:48:41.702490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:29.230 [2024-12-14 18:48:41.702512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.230 [2024-12-14 18:48:41.702535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:29.230 [2024-12-14 18:48:41.702556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.230 [2024-12-14 18:48:41.702572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:29.230 [2024-12-14 18:48:41.702592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.230 [2024-12-14 18:48:41.702642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:61 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.230 [2024-12-14 18:48:41.702667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.230 [2024-12-14 18:48:41.702685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:29.230 [2024-12-14 18:48:41.702707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.230 [2024-12-14 18:48:41.702723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:29.230 [2024-12-14 18:48:41.702745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.230 [2024-12-14 18:48:41.702762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:29.230 [2024-12-14 18:48:41.702784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.230 [2024-12-14 18:48:41.702800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:29.230 [2024-12-14 18:48:41.702822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.230 [2024-12-14 18:48:41.702839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:29.230 [2024-12-14 18:48:41.703033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.230 [2024-12-14 18:48:41.703060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:29.230 [2024-12-14 18:48:41.703086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.230 [2024-12-14 18:48:41.703104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:29.230 [2024-12-14 18:48:41.703127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.230 [2024-12-14 18:48:41.703144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:29.230 [2024-12-14 18:48:41.703167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.230 [2024-12-14 18:48:41.703198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:29.230 [2024-12-14 18:48:41.703238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.230 [2024-12-14 18:48:41.703255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:28:29.230 [2024-12-14 18:48:41.703277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.230 [2024-12-14 18:48:41.703299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
[... several hundred further *NOTICE* command/completion pairs elided (timestamps 18:48:41.703324-18:48:41.724678): READ and WRITE I/Os on sqid:1 (nsid:1, len:8, lba range ~16864-18704; READs logged as SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, WRITEs as SGL DATA BLOCK OFFSET 0x0 len:0x1000) each complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02); sqhd advances 0012 through 007f, wraps to 0000, and continues through 005d ...]
00:28:29.235 [2024-12-14 18:48:41.724701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:29.235 [2024-12-14 18:48:41.724718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:28:29.235 [2024-12-14 18:48:41.724740] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.235 [2024-12-14 18:48:41.724770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.724795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.724813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.724836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.724853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.724875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.724892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.724914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.236 [2024-12-14 18:48:41.724934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.724971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.236 [2024-12-14 18:48:41.724987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.725008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.236 [2024-12-14 18:48:41.725025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.725047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.725063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.725249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.236 [2024-12-14 18:48:41.725275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.725300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.236 [2024-12-14 18:48:41.725319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:28:29.236 [2024-12-14 18:48:41.725340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.725357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.725379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.725395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.725416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.725443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.725468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.725485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.725507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.236 [2024-12-14 18:48:41.725522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.725544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.236 [2024-12-14 18:48:41.725568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.725591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.236 [2024-12-14 18:48:41.725654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.725680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.236 [2024-12-14 18:48:41.725697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.725719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.236 [2024-12-14 18:48:41.725737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.726098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.726123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.726149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.726167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.726188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.726204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.726225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.726240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.726263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.236 [2024-12-14 18:48:41.726279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.726300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.726316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.727294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.236 [2024-12-14 18:48:41.727322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.727349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.236 [2024-12-14 18:48:41.727367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.727388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.236 [2024-12-14 18:48:41.727406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.727427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.236 [2024-12-14 18:48:41.727443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.727463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.236 [2024-12-14 18:48:41.727479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.727499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.727515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.727535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.727551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.727572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.236 [2024-12-14 18:48:41.727587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.727625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.236 [2024-12-14 18:48:41.727660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.727685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.727702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.727723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.727740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.727761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.727778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.728467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.728495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.728521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.728539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.728572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:29.236 [2024-12-14 18:48:41.728590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.236 [2024-12-14 18:48:41.728643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.236 [2024-12-14 18:48:41.728666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.728690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.237 [2024-12-14 18:48:41.728707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.728729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.237 [2024-12-14 18:48:41.728746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.728769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.237 [2024-12-14 18:48:41.728786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.728808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.237 [2024-12-14 18:48:41.728825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.729026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.237 [2024-12-14 18:48:41.729052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.729080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.237 [2024-12-14 18:48:41.729099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.729121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.237 [2024-12-14 18:48:41.729141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.729163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.237 [2024-12-14 18:48:41.729179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.729200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 
lba:18424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.237 [2024-12-14 18:48:41.729229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.729252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.237 [2024-12-14 18:48:41.729269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.729291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.237 [2024-12-14 18:48:41.729307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.729327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.237 [2024-12-14 18:48:41.729343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.729365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.237 [2024-12-14 18:48:41.729381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.729524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.237 [2024-12-14 18:48:41.729559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.729587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.237 [2024-12-14 18:48:41.729622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.729668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.237 [2024-12-14 18:48:41.729687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.729710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.237 [2024-12-14 18:48:41.729727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.729749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.237 [2024-12-14 18:48:41.729765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.729788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.237 [2024-12-14 18:48:41.729804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.729827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.237 [2024-12-14 18:48:41.729844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.729866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.237 [2024-12-14 18:48:41.729895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.729920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.237 [2024-12-14 18:48:41.729938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.729961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.237 [2024-12-14 18:48:41.729992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.730559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.237 [2024-12-14 18:48:41.730586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.730653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.237 [2024-12-14 18:48:41.730676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.730701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.237 [2024-12-14 18:48:41.730719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.730742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.237 [2024-12-14 18:48:41.730759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.237 [2024-12-14 18:48:41.731359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.237 [2024-12-14 18:48:41.731390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
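Every completion in the run above carries the same status pair, (03/02). That pair is SCT/SC from the NVMe completion status word: SCT 0x3 is Path Related Status, and SC 0x02 under it is Asymmetric Access Inaccessible, meaning the ANA state of this controller's path was flipped to inaccessible while I/O was in flight; dnr:0 marks the failures as retryable, which is why the host keeps resubmitting. A minimal standalone C sketch of the decode follows; the struct and helper names are illustrative, not SPDK API, but the bit layout follows the NVMe completion queue entry status field.

#include <stdint.h>
#include <stdio.h>

/* Illustrative decode of the 16-bit status word from CQE dword 3:
 * bit 0 = phase, bits 8:1 = SC, bits 11:9 = SCT, bit 14 = M, bit 15 = DNR. */
struct nvme_status {
    uint8_t sct;  /* Status Code Type: 0x3 = Path Related Status */
    uint8_t sc;   /* Status Code: 0x02 under SCT 0x3 = ANA Inaccessible */
    int p, m, dnr;
};

static struct nvme_status decode_status(uint16_t raw)
{
    struct nvme_status s = {
        .p   = raw & 0x1,          /* phase tag */
        .sc  = (raw >> 1) & 0xff,  /* status code */
        .sct = (raw >> 9) & 0x7,   /* status code type */
        .m   = (raw >> 14) & 0x1,  /* more */
        .dnr = (raw >> 15) & 0x1,  /* do not retry */
    };
    return s;
}

int main(void)
{
    /* SCT=0x3, SC=0x02, p/m/dnr all zero, as printed throughout this run */
    uint16_t raw = (0x3 << 9) | (0x02 << 1);
    struct nvme_status s = decode_status(raw);

    printf("(%02x/%02x) p:%d m:%d dnr:%d\n", s.sct, s.sc, s.p, s.m, s.dnr);
    if (s.sct == 0x3 && s.sc == 0x02)
        printf("-> ASYMMETRIC ACCESS INACCESSIBLE (retryable ANA path error)\n");
    return 0;
}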
[log run elided: matching READ/WRITE command and ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion NOTICE pairs continue on qid:1 (timestamps 18:48:41.731-18:48:41.734, sqhd 0x0024-0x0040)]
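The command prints also show two SGL shapes: the READs carry a transport data block descriptor (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, data moved by the NVMe/TCP transport), while the WRITEs carry an in-capsule data block at offset 0 with len:0x1000, which is consistent with len:8 blocks of 512 bytes each (8 x 512 = 4096 = 0x1000). A hedged C sketch of classifying the descriptor-identifier byte (byte 15 of an SGL descriptor per the NVMe base spec) is below; the 0x5A value used for the transport case is an assumption for illustration, since the spec only reserves subtypes 0xA-0xF as transport specific.

#include <stdint.h>
#include <stdio.h>

/* Byte 15 of an NVMe SGL descriptor: bits 7:4 = type, bits 3:0 = subtype. */
static const char *sgl_type(uint8_t id)
{
    switch (id >> 4) {
    case 0x0: return "DATA BLOCK";
    case 0x4: return "KEYED DATA BLOCK";
    case 0x5: return "TRANSPORT DATA BLOCK";
    default:  return "OTHER";
    }
}

static const char *sgl_subtype(uint8_t id)
{
    switch (id & 0xf) {
    case 0x0: return "ADDRESS";
    case 0x1: return "OFFSET";     /* in-capsule data, as on the WRITEs above */
    default:  return "TRANSPORT";  /* 0xA-0xF are transport specific */
    }
}

int main(void)
{
    uint8_t write_id = 0x01;  /* type 0h (data block), subtype 1h (offset) */
    uint8_t read_id  = 0x5a;  /* type 5h (transport data block); subtype 0xA assumed */

    printf("WRITE: SGL %s %s\n", sgl_type(write_id), sgl_subtype(write_id));
    printf("READ:  SGL %s %s\n", sgl_type(read_id), sgl_subtype(read_id));
    return 0;
}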
[log run elided: further READ/WRITE command and ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion NOTICE pairs on qid:1 (timestamps 18:48:41.734-18:48:41.737, sqhd 0x0041-0x005e)]
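The sqhd field in each completion reports the submission queue head and steps by one per completion; in this run it wraps from 0x007f back to 0x0000, which implies a 128-entry submission queue on qid:1. A tiny C sketch of the wrap arithmetic, with the queue size inferred from the log rather than read from the target:

#include <stdint.h>
#include <stdio.h>

#define SQ_ENTRIES 128  /* inferred from the 0x007f -> 0x0000 wrap above */

int main(void)
{
    uint16_t sqhd = 0x7d;
    for (int i = 0; i < 6; i++) {
        printf("sqhd:%04x\n", sqhd);
        sqhd = (uint16_t)((sqhd + 1) % SQ_ENTRIES);  /* head advances mod queue size */
    }
    return 0;  /* prints 007d 007e 007f 0000 0001 0002, matching the log */
}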
[log run elided: the command/completion NOTICE pairs continue on qid:1 (timestamps 18:48:41.737-18:48:41.740, sqhd 0x005f advancing through 0x007f and wrapping to 0x0005), still all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02)]
00:28:29.240 [2024-12-14 18:48:41.740613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1
lba:18672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.240 [2024-12-14 18:48:41.740647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.740675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.240 [2024-12-14 18:48:41.740693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.740855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.240 [2024-12-14 18:48:41.740892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.740919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.240 [2024-12-14 18:48:41.740938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.740960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.240 [2024-12-14 18:48:41.740992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.741026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.240 [2024-12-14 18:48:41.741044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.741067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.240 [2024-12-14 18:48:41.741091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.741112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.240 [2024-12-14 18:48:41.741129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.741152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.240 [2024-12-14 18:48:41.741170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.741191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.240 [2024-12-14 18:48:41.741208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.741231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.240 [2024-12-14 18:48:41.741248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.741271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.240 [2024-12-14 18:48:41.741287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.741309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.240 [2024-12-14 18:48:41.741326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.741753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.240 [2024-12-14 18:48:41.741781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.741809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.240 [2024-12-14 18:48:41.741827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.741849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.240 [2024-12-14 18:48:41.741867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.741895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.240 [2024-12-14 18:48:41.741912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.741952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.240 [2024-12-14 18:48:41.741971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.742155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.240 [2024-12-14 18:48:41.742183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.742229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.240 [2024-12-14 18:48:41.742249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
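The *NOTICE* lines in this stretch come in pairs: nvme_qpair.c:243 prints each submitted READ/WRITE command (queue id, command id, namespace, starting LBA, length in blocks), and nvme_qpair.c:474 prints the matching completion. When sifting a capture like this one, a small parser helps; the snippet below is an illustrative sketch of my own — the regex and field names mirror the printed text, not any SPDK API.

```python
import re

# One command record as printed by nvme_qpair.c:243 in the log above.
RECORD = ("WRITE sqid:1 cid:27 nsid:1 lba:19000 len:8 "
          "SGL DATA BLOCK OFFSET 0x0 len:0x1000")

# Hypothetical pattern: opcode, then the key:value fields SPDK prints.
# WRITE lines carry a second len: (0x1000 = 4096 bytes), which appears to be
# the SGL data-block byte count; the first len: is in logical blocks.
FIELDS = re.compile(
    r"(?P<op>READ|WRITE) sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) "
    r"nsid:(?P<nsid>\d+) lba:(?P<lba>\d+) len:(?P<nlb>\d+)"
)

m = FIELDS.search(RECORD)
if m:
    f = m.groupdict()
    # len:8 blocks with 512-byte blocks is one 4 KiB IO, consistent with the
    # "IO size: 4096" reported in the run summary further down.
    print(f"{f['op']} of {int(f['nlb'])} blocks at LBA {int(f['lba'])} "
          f"on queue {f['sqid']}, command id {f['cid']}")
```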
00:28:29.240 [2024-12-14 18:48:41.742274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.240 [2024-12-14 18:48:41.742292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.742316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.240 [2024-12-14 18:48:41.742363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.742388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.240 [2024-12-14 18:48:41.742407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.742432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.240 [2024-12-14 18:48:41.742451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.742476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.240 [2024-12-14 18:48:41.742494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.742663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.240 [2024-12-14 18:48:41.742737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.742770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.240 [2024-12-14 18:48:41.742799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.742821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.240 [2024-12-14 18:48:41.742838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.240 [2024-12-14 18:48:41.742860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.240 [2024-12-14 18:48:41.742878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.743032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.241 [2024-12-14 18:48:41.743060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.743087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.743106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.743129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.241 [2024-12-14 18:48:41.743147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.743255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.241 [2024-12-14 18:48:41.743280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.743306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.743324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.743409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.743434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.743513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.743543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.745768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.745798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.745827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.745846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.745869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.745886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.745908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.745925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.745947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.745964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.746009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.746037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.746061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.746079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.746100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.241 [2024-12-14 18:48:41.746116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.746137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.746153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.746174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.241 [2024-12-14 18:48:41.746190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.746211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.241 [2024-12-14 18:48:41.746227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.746868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.746896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.746923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.746955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.746979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:29.241 [2024-12-14 18:48:41.746998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.747020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.747038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.747060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.747077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.747099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.747116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.747137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.747167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.747192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.747226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.747249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.747265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.747440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.747466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.747493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.747512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.747533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.241 [2024-12-14 18:48:41.747549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.747570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 
lba:19216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.241 [2024-12-14 18:48:41.747586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.747636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.241 [2024-12-14 18:48:41.747674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.747699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.241 [2024-12-14 18:48:41.747717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.747740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.747757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.747779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.747796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.747818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.747835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.747857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.241 [2024-12-14 18:48:41.747874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.747910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.241 [2024-12-14 18:48:41.747929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.748107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.241 [2024-12-14 18:48:41.748132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:29.241 [2024-12-14 18:48:41.748158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.748176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.748198] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.748214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.748235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.242 [2024-12-14 18:48:41.748252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.748274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.242 [2024-12-14 18:48:41.748298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.748319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.242 [2024-12-14 18:48:41.748335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.748356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.242 [2024-12-14 18:48:41.748372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.748395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.748411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.748433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.748449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.748471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.748488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.748509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.748525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.749717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.749749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
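One detail worth noticing in these completions: the sqhd (submission queue head) field advances by one per completion and wraps from 007f back to 0000, which is consistent with a 128-entry submission queue (the run summary below also reports depth: 128). A one-liner makes the wrap explicit; QUEUE_SIZE here is an inference from the log, not a value the log states directly.

```python
QUEUE_SIZE = 128  # inferred: sqhd values above span 0x00..0x7f

def next_sqhd(sqhd: int) -> int:
    # The head pointer reported in each completion advances modulo queue size.
    return (sqhd + 1) % QUEUE_SIZE

print(f"{next_sqhd(0x7f):#06x}")  # -> 0x0000, matching sqhd:007f -> sqhd:0000
```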
00:28:29.242 [2024-12-14 18:48:41.749778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.242 [2024-12-14 18:48:41.749802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.749824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.242 [2024-12-14 18:48:41.749841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.749865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.749882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.749904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.242 [2024-12-14 18:48:41.749921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.750144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.750178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.750213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.750231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.750253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.750269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.750289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.242 [2024-12-14 18:48:41.750305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.750327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.242 [2024-12-14 18:48:41.750343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.750364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.242 [2024-12-14 18:48:41.750380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.750401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.242 [2024-12-14 18:48:41.750417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.750438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.750468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.750492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.750509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.750531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.242 [2024-12-14 18:48:41.750547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.750568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.242 [2024-12-14 18:48:41.750584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.750622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.750657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.750682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.750700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.750902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.750939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.750971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.750991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.751014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.751030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.751052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.751069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.751091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.242 [2024-12-14 18:48:41.751108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.751130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.242 [2024-12-14 18:48:41.751146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.751168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.242 [2024-12-14 18:48:41.751197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.751237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.242 [2024-12-14 18:48:41.751254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.751276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.242 [2024-12-14 18:48:41.751292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.751313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.751330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.751480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.242 [2024-12-14 18:48:41.751516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.751542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.242 [2024-12-14 18:48:41.751561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:29.242 [2024-12-14 18:48:41.751588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:29.243 [2024-12-14 18:48:41.751632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.751673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.243 [2024-12-14 18:48:41.751692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.751715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.243 [2024-12-14 18:48:41.751732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.751754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.243 [2024-12-14 18:48:41.751779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.752625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.243 [2024-12-14 18:48:41.752690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.752722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.243 [2024-12-14 18:48:41.752742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.752765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.243 [2024-12-14 18:48:41.752782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.752818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.243 [2024-12-14 18:48:41.752837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.752860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.243 [2024-12-14 18:48:41.752877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.754066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.243 [2024-12-14 18:48:41.754095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.754132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:19944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.243 [2024-12-14 18:48:41.754154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.754177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.243 [2024-12-14 18:48:41.754193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.754215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.243 [2024-12-14 18:48:41.754230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.754251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.243 [2024-12-14 18:48:41.754267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.754288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.243 [2024-12-14 18:48:41.754303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.754325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.243 [2024-12-14 18:48:41.754341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.754362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.243 [2024-12-14 18:48:41.754382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.754403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.243 [2024-12-14 18:48:41.754419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.754440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.243 [2024-12-14 18:48:41.754456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.754492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.243 [2024-12-14 18:48:41.754509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.754530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.243 [2024-12-14 18:48:41.754546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.754568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.243 [2024-12-14 18:48:41.754584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.754621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.243 [2024-12-14 18:48:41.754655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.755064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.243 [2024-12-14 18:48:41.755093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.755121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.243 [2024-12-14 18:48:41.755139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.755162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.243 [2024-12-14 18:48:41.755179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.755202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.243 [2024-12-14 18:48:41.755234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.755256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.243 [2024-12-14 18:48:41.755271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.755293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.243 [2024-12-14 18:48:41.755310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:29.243 [2024-12-14 18:48:41.755332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.243 [2024-12-14 18:48:41.755348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
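Every completion above reports the same status pair, (03/02). In NVMe status-code terms that is Status Code Type 0x3 (Path Related Status) with Status Code 0x02, which the NVMe base specification names Asymmetric Access Inaccessible — exactly the text SPDK prints. That is the expected signal here: the multipath-status script drives the active path's ANA state to inaccessible and watches I/O fail over. A minimal decoder sketch (my own, not SPDK code):

```python
# Hypothetical decoder for the "(SCT/SC)" pair in the completion lines above.
# Values follow the NVMe base specification status-code tables.
PATH_RELATED = 0x3  # Status Code Type: Path Related Status

PATH_STATUS = {
    0x00: "INTERNAL PATH ERROR",
    0x01: "ASYMMETRIC ACCESS PERSISTENT LOSS",
    0x02: "ASYMMETRIC ACCESS INACCESSIBLE",
    0x03: "ASYMMETRIC ACCESS TRANSITION",
}

def decode(sct: int, sc: int) -> str:
    if sct == PATH_RELATED:
        return PATH_STATUS.get(sc, f"unknown path status {sc:#04x}")
    return f"sct={sct:#x} sc={sc:#x}"

print(decode(0x03, 0x02))  # -> ASYMMETRIC ACCESS INACCESSIBLE
```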
00:28:29.243 [2024-12-14 18:48:41.755369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.243 [2024-12-14 18:48:41.755385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:28:29.243 [2024-12-14 18:48:41.755406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.243 [2024-12-14 18:48:41.755437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:28:29.243 [2024-12-14 18:48:41.755461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.243 [2024-12-14 18:48:41.755478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:28:29.243 9453.18 IOPS, 36.93 MiB/s
[2024-12-14T18:48:44.996Z] 9440.46 IOPS, 36.88 MiB/s
[2024-12-14T18:48:44.996Z] 9427.19 IOPS, 36.82 MiB/s
[2024-12-14T18:48:44.996Z] Received shutdown signal, test time was about 36.044820 seconds
00:28:29.243
00:28:29.243 Latency(us)
00:28:29.243 [2024-12-14T18:48:44.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:29.243 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:29.243 Verification LBA range: start 0x0 length 0x4000
00:28:29.243 Nvme0n1 : 36.04 9424.30 36.81 0.00 0.00 13556.00 148.01 4026531.84
00:28:29.243 [2024-12-14T18:48:44.996Z] ===================================================================================================================
00:28:29.243 [2024-12-14T18:48:44.996Z] Total : 9424.30 36.81 0.00 0.00 13556.00 148.01 4026531.84
00:28:29.243 [2024-12-14 18:48:44.617950] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times
00:28:29.243 18:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:29.503 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:28:29.503 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:28:29.503 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:28:29.503 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup
00:28:29.503 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:28:29.763 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:29.763 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:28:29.763 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:29.763 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:29.763 rmmod nvme_tcp
00:28:29.763 rmmod nvme_fabrics
00:28:29.763 rmmod nvme_keyring
00:28:29.763 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
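The throughput figures in the run summary above are internally consistent: with the job's 4096-byte I/O size, MiB/s follows directly from IOPS.

```latex
\text{MiB/s} \;=\; \frac{\text{IOPS}\times 4096}{2^{20}},
\qquad
\frac{9424.30\times 4096}{1048576}\;\approx\;36.81\ \text{MiB/s}
```

The same arithmetic holds for the per-sample lines (9453.18 IOPS gives 36.93 MiB/s), and 36.04 s at 9424.30 IOPS works out to roughly 340 000 verified I/Os over the run.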
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:29.763 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:28:29.763 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:28:29.763 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 110282 ']' 00:28:29.763 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 110282 00:28:29.763 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 110282 ']' 00:28:29.763 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 110282 00:28:29.763 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:28:29.763 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:29.763 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110282 00:28:29.763 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:29.763 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:29.763 killing process with pid 110282 00:28:29.763 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110282' 00:28:29.763 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 110282 00:28:29.763 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 110282 00:28:30.021 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:30.021 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:30.021 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:30.021 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:28:30.021 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:28:30.021 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:30.021 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:28:30.021 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:30.021 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:30.021 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:30.021 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:30.021 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:30.021 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:30.021 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:30.021 18:48:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:30.021 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:30.021 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:30.021 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:30.021 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:30.021 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:30.280 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:30.280 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:30.280 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:30.280 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.280 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.280 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.280 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:28:30.280 00:28:30.280 real 0m42.525s 00:28:30.280 user 2m18.752s 00:28:30.280 sys 0m10.976s 00:28:30.280 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:30.280 18:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:30.280 ************************************ 00:28:30.280 END TEST nvmf_host_multipath_status 00:28:30.280 ************************************ 00:28:30.280 18:48:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:30.280 18:48:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:30.280 18:48:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:30.280 18:48:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.280 ************************************ 00:28:30.280 START TEST nvmf_discovery_remove_ifc 00:28:30.280 ************************************ 00:28:30.280 18:48:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:30.280 * Looking for test storage... 
00:28:30.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:30.280 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:30.280 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:28:30.280 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:30.539 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:30.539 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:30.539 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:30.539 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:30.539 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:28:30.539 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:30.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.540 --rc genhtml_branch_coverage=1 00:28:30.540 --rc genhtml_function_coverage=1 00:28:30.540 --rc genhtml_legend=1 00:28:30.540 --rc geninfo_all_blocks=1 00:28:30.540 --rc geninfo_unexecuted_blocks=1 00:28:30.540 00:28:30.540 ' 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:30.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.540 --rc genhtml_branch_coverage=1 00:28:30.540 --rc genhtml_function_coverage=1 00:28:30.540 --rc genhtml_legend=1 00:28:30.540 --rc geninfo_all_blocks=1 00:28:30.540 --rc geninfo_unexecuted_blocks=1 00:28:30.540 00:28:30.540 ' 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:30.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.540 --rc genhtml_branch_coverage=1 00:28:30.540 --rc genhtml_function_coverage=1 00:28:30.540 --rc genhtml_legend=1 00:28:30.540 --rc geninfo_all_blocks=1 00:28:30.540 --rc geninfo_unexecuted_blocks=1 00:28:30.540 00:28:30.540 ' 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:30.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.540 --rc genhtml_branch_coverage=1 00:28:30.540 --rc genhtml_function_coverage=1 00:28:30.540 --rc genhtml_legend=1 00:28:30.540 --rc geninfo_all_blocks=1 00:28:30.540 --rc geninfo_unexecuted_blocks=1 00:28:30.540 00:28:30.540 ' 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:30.540 18:48:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:30.540 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:30.540 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@456 -- # nvmf_veth_init 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:30.541 18:48:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:30.541 Cannot find device "nvmf_init_br" 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:30.541 Cannot find device "nvmf_init_br2" 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:30.541 Cannot find device "nvmf_tgt_br" 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:30.541 Cannot find device "nvmf_tgt_br2" 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:30.541 Cannot find device "nvmf_init_br" 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:30.541 Cannot find device "nvmf_init_br2" 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:30.541 Cannot find device "nvmf_tgt_br" 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:30.541 Cannot find device "nvmf_tgt_br2" 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:30.541 Cannot find device "nvmf_br" 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:30.541 Cannot find device "nvmf_init_if" 00:28:30.541 18:48:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:30.541 Cannot find device "nvmf_init_if2" 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:30.541 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:30.541 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:30.541 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:30.800 18:48:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:30.800 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:30.800 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:28:30.800 00:28:30.800 --- 10.0.0.3 ping statistics --- 00:28:30.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.800 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:30.800 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:30.800 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:28:30.800 00:28:30.800 --- 10.0.0.4 ping statistics --- 00:28:30.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.800 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:30.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:30.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:28:30.800 00:28:30.800 --- 10.0.0.1 ping statistics --- 00:28:30.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.800 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:30.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:30.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:28:30.800 00:28:30.800 --- 10.0.0.2 ping statistics --- 00:28:30.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.800 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@457 -- # return 0 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=111755 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 111755 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 111755 ']' 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.800 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:30.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.801 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:30.801 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:30.801 18:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:31.059 [2024-12-14 18:48:46.602637] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:31.059 [2024-12-14 18:48:46.602721] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:31.059 [2024-12-14 18:48:46.746871] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.318 [2024-12-14 18:48:46.832786] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:31.318 [2024-12-14 18:48:46.832861] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:31.318 [2024-12-14 18:48:46.832876] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:31.318 [2024-12-14 18:48:46.832887] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:31.318 [2024-12-14 18:48:46.832897] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:31.318 [2024-12-14 18:48:46.832935] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.253 18:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:32.254 18:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:28:32.254 18:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:32.254 18:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:32.254 18:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:32.254 18:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.254 18:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:32.254 18:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.254 18:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:32.254 [2024-12-14 18:48:47.718305] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.254 [2024-12-14 18:48:47.726444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:28:32.254 null0 00:28:32.254 [2024-12-14 18:48:47.758336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:32.254 18:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.254 18:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=111805 00:28:32.254 18:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:32.254 18:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@60 -- # waitforlisten 111805 /tmp/host.sock 00:28:32.254 18:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 111805 ']' 00:28:32.254 18:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:28:32.254 18:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:32.254 18:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:32.254 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:32.254 18:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:32.254 18:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:32.254 [2024-12-14 18:48:47.832257] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:32.254 [2024-12-14 18:48:47.832356] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111805 ] 00:28:32.254 [2024-12-14 18:48:47.967661] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.512 [2024-12-14 18:48:48.048171] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.512 18:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:32.512 18:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:28:32.512 18:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:32.512 18:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:32.512 18:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.512 18:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:32.512 18:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.512 18:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:32.512 18:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.512 18:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:32.512 18:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.512 18:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:32.512 18:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.512 18:48:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.889 [2024-12-14 18:48:49.281307] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:28:33.889 [2024-12-14 18:48:49.281341] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:28:33.889 [2024-12-14 18:48:49.281358] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:33.889 [2024-12-14 18:48:49.367403] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:28:33.889 [2024-12-14 18:48:49.424070] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:33.889 [2024-12-14 18:48:49.424134] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:33.889 [2024-12-14 18:48:49.424160] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:33.889 [2024-12-14 18:48:49.424175] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:28:33.889 [2024-12-14 18:48:49.424192] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:28:33.889 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.889 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:33.889 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:33.889 [2024-12-14 18:48:49.429802] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x849a10 was disconnected and freed. delete nvme_qpair. 
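The @33/@29 trace that follows is the test's polling loop: get_bdev_list asks the host RPC socket for all bdevs and normalizes the names, and wait_for_bdev re-checks once per second until the list matches the expected value. Reconstructed from the trace (rpc_cmd is the suite's wrapper around scripts/rpc.py; a sketch, not the verbatim helpers from discovery_remove_ifc.sh):

get_bdev_list() {
    # list bdev names, sorted and space-joined, e.g. "nvme0n1"
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1   # returns once discovery has attached nvme0 and exposed nvme0n1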
00:28:33.889 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:33.890 18:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:34.825 18:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:34.825 18:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:34.825 18:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:34.825 18:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.825 18:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:34.825 18:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:34.825 18:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:35.083 18:48:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.083 18:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:35.083 18:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:36.017 18:48:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:36.017 18:48:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:36.017 18:48:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.017 18:48:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:36.017 18:48:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:36.017 18:48:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:36.017 18:48:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:36.017 18:48:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.017 18:48:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:36.017 18:48:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:36.952 18:48:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:36.952 18:48:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:36.952 18:48:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:36.953 18:48:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.953 18:48:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:36.953 18:48:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:36.953 18:48:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:36.953 18:48:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.211 18:48:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:37.211 18:48:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:38.147 18:48:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:38.147 18:48:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:38.147 18:48:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.147 18:48:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:38.147 18:48:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:38.147 18:48:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:38.147 18:48:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:38.147 18:48:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.147 18:48:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:38.147 18:48:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:39.104 18:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:39.104 18:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:39.104 18:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:39.104 18:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.104 18:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:39.104 18:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:39.104 18:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:39.104 18:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.104 18:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:39.104 18:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:39.376 [2024-12-14 18:48:54.852123] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:39.376 [2024-12-14 18:48:54.852182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.376 [2024-12-14 18:48:54.852198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.376 [2024-12-14 18:48:54.852209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.376 [2024-12-14 18:48:54.852217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.376 [2024-12-14 18:48:54.852225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.376 [2024-12-14 18:48:54.852233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.376 [2024-12-14 18:48:54.852242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.376 [2024-12-14 18:48:54.852249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.376 [2024-12-14 18:48:54.852257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.376 [2024-12-14 18:48:54.852265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.376 [2024-12-14 18:48:54.852273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265c0 is same with the state(6) to be set 00:28:39.376 [2024-12-14 18:48:54.862119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8265c0 (9): Bad file descriptor 00:28:39.376 [2024-12-14 18:48:54.872143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:40.316 18:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:40.316 18:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:40.316 18:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:40.316 18:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.316 18:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:40.316 18:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:40.316 18:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:40.316 [2024-12-14 18:48:55.903708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:40.316 [2024-12-14 18:48:55.903810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8265c0 with addr=10.0.0.3, port=4420 00:28:40.316 [2024-12-14 18:48:55.903840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265c0 is same with the state(6) to be set 00:28:40.316 [2024-12-14 18:48:55.903895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8265c0 (9): Bad file descriptor 00:28:40.316 [2024-12-14 18:48:55.904646] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:40.316 [2024-12-14 18:48:55.904728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:40.316 [2024-12-14 18:48:55.904750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:40.316 [2024-12-14 18:48:55.904771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:40.316 [2024-12-14 18:48:55.904804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.316 [2024-12-14 18:48:55.904823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:40.316 18:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.316 18:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:40.316 18:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:41.250 [2024-12-14 18:48:56.904867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
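The reset/reconnect churn above is bounded by the timeouts the test passed when it started discovery (the @69 rpc_cmd earlier in this log): --reconnect-delay-sec 1 retries the connection roughly once per second, --fast-io-fail-timeout-sec 1 fails pending I/O quickly while reconnection continues, and --ctrlr-loss-timeout-sec 2 gives up on the controller if the path stays down that long. The invocation, repeated from the trace for reference:

rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach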
00:28:41.250 [2024-12-14 18:48:56.904901] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:41.250 [2024-12-14 18:48:56.904911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:41.250 [2024-12-14 18:48:56.904919] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:28:41.250 [2024-12-14 18:48:56.904934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.250 [2024-12-14 18:48:56.904959] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:28:41.250 [2024-12-14 18:48:56.904985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.250 [2024-12-14 18:48:56.904998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.250 [2024-12-14 18:48:56.905009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.250 [2024-12-14 18:48:56.905017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.250 [2024-12-14 18:48:56.905026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.250 [2024-12-14 18:48:56.905034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.250 [2024-12-14 18:48:56.905042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.250 [2024-12-14 18:48:56.905050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.250 [2024-12-14 18:48:56.905058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.250 [2024-12-14 18:48:56.905065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.250 [2024-12-14 18:48:56.905073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
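With the discovery entry removed and both controllers torn down, the trace below restores connectivity by re-adding the target address inside the network namespace, exactly as the discovery_remove_ifc.sh@82-83 lines show:

    # Put the removed address back on the target interface and bring the
    # link up so the discovery service at 10.0.0.3:8009 can reconnect.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

Once the address is back, the test waits for the re-attached namespace to surface as bdev nvme1n1 (the wait_for_bdev nvme1n1 call that follows).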
00:28:41.250 [2024-12-14 18:48:56.905756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x815d20 (9): Bad file descriptor 00:28:41.250 [2024-12-14 18:48:56.906768] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:41.250 [2024-12-14 18:48:56.906788] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:41.250 18:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:41.250 18:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:41.250 18:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:41.250 18:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:41.250 18:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:41.250 18:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.250 18:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:41.250 18:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.250 18:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:41.250 18:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:41.250 18:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:41.509 18:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:41.509 18:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:41.509 18:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:41.509 18:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.509 18:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:41.509 18:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:41.509 18:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:41.509 18:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:41.509 18:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.509 18:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:41.509 18:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:42.443 18:48:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:42.443 18:48:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:42.443 18:48:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:42.444 18:48:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:42.444 18:48:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.444 18:48:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:42.444 18:48:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:42.444 18:48:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.444 18:48:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:42.444 18:48:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:43.378 [2024-12-14 18:48:58.912588] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:28:43.378 [2024-12-14 18:48:58.912608] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:28:43.378 [2024-12-14 18:48:58.912625] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:43.378 [2024-12-14 18:48:58.998746] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:28:43.378 [2024-12-14 18:48:59.054698] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:43.378 [2024-12-14 18:48:59.054742] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:43.378 [2024-12-14 18:48:59.054763] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:43.378 [2024-12-14 18:48:59.054779] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:28:43.378 [2024-12-14 18:48:59.054787] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:28:43.378 [2024-12-14 18:48:59.061184] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x823ca0 was disconnected and freed. delete nvme_qpair. 
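The re-attach above is driven by SPDK's bdev_nvme discovery poller, which kept watching 10.0.0.3:8009 throughout. The RPC that opens such a session is not shown in this excerpt; purely as a hypothetical sketch, a discovery connection like Discovery[10.0.0.3:8009] would typically be started along these lines (flag spelling per SPDK's rpc.py — verify against your tree):

    # Hypothetical: attach controllers advertised by the discovery service
    # automatically; "-b nvme" makes attached controllers surface as
    # nvme1, nvme2, ... and their namespaces as nvme1n1 and so on.
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -w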
00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 111805 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 111805 ']' 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 111805 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 111805 00:28:43.636 killing process with pid 111805 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 111805' 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 111805 00:28:43.636 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 111805 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:43.895 rmmod nvme_tcp 00:28:43.895 rmmod nvme_fabrics 00:28:43.895 rmmod nvme_keyring 00:28:43.895 18:48:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 111755 ']' 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 111755 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 111755 ']' 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 111755 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 111755 00:28:43.895 killing process with pid 111755 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 111755' 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 111755 00:28:43.895 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 111755 00:28:44.153 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:44.153 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:44.153 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:44.153 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:28:44.153 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:28:44.153 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:28:44.153 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:44.153 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:44.153 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:44.153 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:44.153 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:44.412 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:44.412 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:44.412 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:44.412 18:48:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:44.412 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:44.412 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:44.412 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:44.412 18:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:44.412 18:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:44.412 18:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:44.412 18:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:44.412 18:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:44.412 18:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.412 18:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.412 18:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.412 18:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:28:44.412 ************************************ 00:28:44.412 END TEST nvmf_discovery_remove_ifc 00:28:44.412 ************************************ 00:28:44.412 00:28:44.412 real 0m14.179s 00:28:44.412 user 0m24.620s 00:28:44.412 sys 0m1.779s 00:28:44.412 18:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:44.412 18:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:44.412 18:49:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:44.412 18:49:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:44.412 18:49:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:44.412 18:49:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.412 ************************************ 00:28:44.412 START TEST nvmf_identify_kernel_target 00:28:44.412 ************************************ 00:28:44.412 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:44.671 * Looking for test storage... 
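Before following the new test, note the killprocess helper that the teardown above exercised twice (pids 111805 and 111755). A rough reconstruction pieced together from its xtrace; the real autotest_common.sh also handles the sudo-wrapped case, which is elided here:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0   # nothing to do if already gone
        if [[ $(uname) == Linux ]]; then
            # Check the command name so a sudo wrapper is not signalled
            # directly (the trace shows reactor_0 and reactor_1 here).
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name != sudo ]] || return 0   # elided: sudo handling
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }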
00:28:44.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:44.671 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:44.671 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:28:44.671 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:44.671 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:44.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.672 --rc genhtml_branch_coverage=1 00:28:44.672 --rc genhtml_function_coverage=1 00:28:44.672 --rc genhtml_legend=1 00:28:44.672 --rc geninfo_all_blocks=1 00:28:44.672 --rc geninfo_unexecuted_blocks=1 00:28:44.672 00:28:44.672 ' 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:44.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.672 --rc genhtml_branch_coverage=1 00:28:44.672 --rc genhtml_function_coverage=1 00:28:44.672 --rc genhtml_legend=1 00:28:44.672 --rc geninfo_all_blocks=1 00:28:44.672 --rc geninfo_unexecuted_blocks=1 00:28:44.672 00:28:44.672 ' 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:44.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.672 --rc genhtml_branch_coverage=1 00:28:44.672 --rc genhtml_function_coverage=1 00:28:44.672 --rc genhtml_legend=1 00:28:44.672 --rc geninfo_all_blocks=1 00:28:44.672 --rc geninfo_unexecuted_blocks=1 00:28:44.672 00:28:44.672 ' 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:44.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.672 --rc genhtml_branch_coverage=1 00:28:44.672 --rc genhtml_function_coverage=1 00:28:44.672 --rc genhtml_legend=1 00:28:44.672 --rc geninfo_all_blocks=1 00:28:44.672 --rc geninfo_unexecuted_blocks=1 00:28:44.672 00:28:44.672 ' 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
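The lt 1.15 2 check traced above splits each version string on dots and dashes and compares components numerically, left to right. A condensed sketch of that cmp_versions logic as the xtrace shows it (the in-tree scripts/common.sh also validates each component via its decimal helper, elided here):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local v op=$2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$3"
        # First differing component decides; missing components count as 0.
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then
                [[ $op == '>' || $op == '>=' ]]; return
            elif ((${ver1[v]:-0} < ${ver2[v]:-0})); then
                [[ $op == '<' || $op == '<=' ]]; return
            fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # equal throughout
    }

Here lt 1.15 2 succeeds on the first component (1 < 2), matching the return 0 in the trace, so the lcov 1.15 toolchain takes the pre-2.x option set.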
00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:44.672 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:44.672 18:49:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:44.672 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:44.673 18:49:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:44.673 Cannot find device "nvmf_init_br" 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:44.673 Cannot find device "nvmf_init_br2" 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:28:44.673 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:44.932 Cannot find device "nvmf_tgt_br" 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:44.932 Cannot find device "nvmf_tgt_br2" 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:44.932 Cannot find device "nvmf_init_br" 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:44.932 Cannot find device "nvmf_init_br2" 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:44.932 Cannot find device "nvmf_tgt_br" 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:44.932 Cannot find device "nvmf_tgt_br2" 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:44.932 Cannot find device "nvmf_br" 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:44.932 Cannot find device "nvmf_init_if" 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:44.932 Cannot find device "nvmf_init_if2" 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:44.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:44.932 18:49:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:44.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:44.932 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:44.932 18:49:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:45.191 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:45.191 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:45.191 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:45.191 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:45.192 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:45.192 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:28:45.192 00:28:45.192 --- 10.0.0.3 ping statistics --- 00:28:45.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.192 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:45.192 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:45.192 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:28:45.192 00:28:45.192 --- 10.0.0.4 ping statistics --- 00:28:45.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.192 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:45.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:45.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:28:45.192 00:28:45.192 --- 10.0.0.1 ping statistics --- 00:28:45.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.192 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:45.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:45.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:28:45.192 00:28:45.192 --- 10.0.0.2 ping statistics --- 00:28:45.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.192 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # return 0 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:45.192 18:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:45.451 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:45.710 Waiting for block devices as requested 00:28:45.710 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:45.710 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:45.710 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:28:45.710 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:45.710 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:28:45.710 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:28:45.710 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:45.710 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:45.710 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:28:45.710 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:45.710 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:45.969 No valid GPT data, bailing 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:28:45.969 18:49:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:45.969 No valid GPT data, bailing 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:45.969 No valid GPT data, bailing 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:45.969 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:28:45.970 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:28:45.970 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:45.970 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:28:45.970 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:28:45.970 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:45.970 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:45.970 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:28:45.970 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:28:45.970 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:46.228 No valid GPT data, bailing 00:28:46.228 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:46.228 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:46.228 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:46.228 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:28:46.228 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:28:46.228 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:46.228 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:46.228 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:46.228 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:46.228 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:28:46.228 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:28:46.228 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:28:46.228 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:28:46.228 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:28:46.228 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:28:46.228 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:28:46.228 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:46.228 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -a 10.0.0.1 -t tcp -s 4420 00:28:46.228 00:28:46.228 Discovery Log Number of Records 2, Generation counter 2 00:28:46.228 =====Discovery Log Entry 0====== 00:28:46.228 trtype: tcp 00:28:46.228 adrfam: ipv4 00:28:46.228 subtype: current discovery subsystem 00:28:46.228 treq: not specified, sq flow control disable supported 00:28:46.228 portid: 1 00:28:46.228 trsvcid: 4420 00:28:46.228 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:46.228 traddr: 10.0.0.1 00:28:46.228 eflags: none 00:28:46.228 sectype: none 00:28:46.228 =====Discovery Log Entry 1====== 00:28:46.228 trtype: tcp 00:28:46.228 adrfam: ipv4 00:28:46.228 subtype: nvme subsystem 00:28:46.228 treq: not 
specified, sq flow control disable supported 00:28:46.228 portid: 1 00:28:46.228 trsvcid: 4420 00:28:46.229 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:46.229 traddr: 10.0.0.1 00:28:46.229 eflags: none 00:28:46.229 sectype: none 00:28:46.229 18:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:46.229 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:46.488 ===================================================== 00:28:46.488 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:46.488 ===================================================== 00:28:46.488 Controller Capabilities/Features 00:28:46.488 ================================ 00:28:46.488 Vendor ID: 0000 00:28:46.488 Subsystem Vendor ID: 0000 00:28:46.488 Serial Number: c097922611529d3f337f 00:28:46.488 Model Number: Linux 00:28:46.488 Firmware Version: 6.8.9-20 00:28:46.488 Recommended Arb Burst: 0 00:28:46.488 IEEE OUI Identifier: 00 00 00 00:28:46.488 Multi-path I/O 00:28:46.488 May have multiple subsystem ports: No 00:28:46.488 May have multiple controllers: No 00:28:46.488 Associated with SR-IOV VF: No 00:28:46.488 Max Data Transfer Size: Unlimited 00:28:46.488 Max Number of Namespaces: 0 00:28:46.488 Max Number of I/O Queues: 1024 00:28:46.488 NVMe Specification Version (VS): 1.3 00:28:46.488 NVMe Specification Version (Identify): 1.3 00:28:46.488 Maximum Queue Entries: 1024 00:28:46.488 Contiguous Queues Required: No 00:28:46.488 Arbitration Mechanisms Supported 00:28:46.488 Weighted Round Robin: Not Supported 00:28:46.488 Vendor Specific: Not Supported 00:28:46.488 Reset Timeout: 7500 ms 00:28:46.488 Doorbell Stride: 4 bytes 00:28:46.488 NVM Subsystem Reset: Not Supported 00:28:46.488 Command Sets Supported 00:28:46.488 NVM Command Set: Supported 00:28:46.488 Boot Partition: Not Supported 00:28:46.488 Memory Page Size Minimum: 4096 bytes 00:28:46.488 Memory Page Size Maximum: 4096 bytes 00:28:46.489 Persistent Memory Region: Not Supported 00:28:46.489 Optional Asynchronous Events Supported 00:28:46.489 Namespace Attribute Notices: Not Supported 00:28:46.489 Firmware Activation Notices: Not Supported 00:28:46.489 ANA Change Notices: Not Supported 00:28:46.489 PLE Aggregate Log Change Notices: Not Supported 00:28:46.489 LBA Status Info Alert Notices: Not Supported 00:28:46.489 EGE Aggregate Log Change Notices: Not Supported 00:28:46.489 Normal NVM Subsystem Shutdown event: Not Supported 00:28:46.489 Zone Descriptor Change Notices: Not Supported 00:28:46.489 Discovery Log Change Notices: Supported 00:28:46.489 Controller Attributes 00:28:46.489 128-bit Host Identifier: Not Supported 00:28:46.489 Non-Operational Permissive Mode: Not Supported 00:28:46.489 NVM Sets: Not Supported 00:28:46.489 Read Recovery Levels: Not Supported 00:28:46.489 Endurance Groups: Not Supported 00:28:46.489 Predictable Latency Mode: Not Supported 00:28:46.489 Traffic Based Keep ALive: Not Supported 00:28:46.489 Namespace Granularity: Not Supported 00:28:46.489 SQ Associations: Not Supported 00:28:46.489 UUID List: Not Supported 00:28:46.489 Multi-Domain Subsystem: Not Supported 00:28:46.489 Fixed Capacity Management: Not Supported 00:28:46.489 Variable Capacity Management: Not Supported 00:28:46.489 Delete Endurance Group: Not Supported 00:28:46.489 Delete NVM Set: Not Supported 00:28:46.489 Extended LBA Formats Supported: Not Supported 00:28:46.489 Flexible Data 
Placement Supported: Not Supported 00:28:46.489 00:28:46.489 Controller Memory Buffer Support 00:28:46.489 ================================ 00:28:46.489 Supported: No 00:28:46.489 00:28:46.489 Persistent Memory Region Support 00:28:46.489 ================================ 00:28:46.489 Supported: No 00:28:46.489 00:28:46.489 Admin Command Set Attributes 00:28:46.489 ============================ 00:28:46.489 Security Send/Receive: Not Supported 00:28:46.489 Format NVM: Not Supported 00:28:46.489 Firmware Activate/Download: Not Supported 00:28:46.489 Namespace Management: Not Supported 00:28:46.489 Device Self-Test: Not Supported 00:28:46.489 Directives: Not Supported 00:28:46.489 NVMe-MI: Not Supported 00:28:46.489 Virtualization Management: Not Supported 00:28:46.489 Doorbell Buffer Config: Not Supported 00:28:46.489 Get LBA Status Capability: Not Supported 00:28:46.489 Command & Feature Lockdown Capability: Not Supported 00:28:46.489 Abort Command Limit: 1 00:28:46.489 Async Event Request Limit: 1 00:28:46.489 Number of Firmware Slots: N/A 00:28:46.489 Firmware Slot 1 Read-Only: N/A 00:28:46.489 Firmware Activation Without Reset: N/A 00:28:46.489 Multiple Update Detection Support: N/A 00:28:46.489 Firmware Update Granularity: No Information Provided 00:28:46.489 Per-Namespace SMART Log: No 00:28:46.489 Asymmetric Namespace Access Log Page: Not Supported 00:28:46.489 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:46.489 Command Effects Log Page: Not Supported 00:28:46.489 Get Log Page Extended Data: Supported 00:28:46.489 Telemetry Log Pages: Not Supported 00:28:46.489 Persistent Event Log Pages: Not Supported 00:28:46.489 Supported Log Pages Log Page: May Support 00:28:46.489 Commands Supported & Effects Log Page: Not Supported 00:28:46.489 Feature Identifiers & Effects Log Page:May Support 00:28:46.489 NVMe-MI Commands & Effects Log Page: May Support 00:28:46.489 Data Area 4 for Telemetry Log: Not Supported 00:28:46.489 Error Log Page Entries Supported: 1 00:28:46.489 Keep Alive: Not Supported 00:28:46.489 00:28:46.489 NVM Command Set Attributes 00:28:46.489 ========================== 00:28:46.489 Submission Queue Entry Size 00:28:46.489 Max: 1 00:28:46.489 Min: 1 00:28:46.489 Completion Queue Entry Size 00:28:46.489 Max: 1 00:28:46.489 Min: 1 00:28:46.489 Number of Namespaces: 0 00:28:46.489 Compare Command: Not Supported 00:28:46.489 Write Uncorrectable Command: Not Supported 00:28:46.489 Dataset Management Command: Not Supported 00:28:46.489 Write Zeroes Command: Not Supported 00:28:46.489 Set Features Save Field: Not Supported 00:28:46.489 Reservations: Not Supported 00:28:46.489 Timestamp: Not Supported 00:28:46.489 Copy: Not Supported 00:28:46.489 Volatile Write Cache: Not Present 00:28:46.489 Atomic Write Unit (Normal): 1 00:28:46.489 Atomic Write Unit (PFail): 1 00:28:46.489 Atomic Compare & Write Unit: 1 00:28:46.489 Fused Compare & Write: Not Supported 00:28:46.489 Scatter-Gather List 00:28:46.489 SGL Command Set: Supported 00:28:46.489 SGL Keyed: Not Supported 00:28:46.489 SGL Bit Bucket Descriptor: Not Supported 00:28:46.489 SGL Metadata Pointer: Not Supported 00:28:46.489 Oversized SGL: Not Supported 00:28:46.489 SGL Metadata Address: Not Supported 00:28:46.489 SGL Offset: Supported 00:28:46.489 Transport SGL Data Block: Not Supported 00:28:46.489 Replay Protected Memory Block: Not Supported 00:28:46.489 00:28:46.489 Firmware Slot Information 00:28:46.489 ========================= 00:28:46.489 Active slot: 0 00:28:46.489 00:28:46.489 00:28:46.489 Error Log 
00:28:46.489 ========= 00:28:46.489 00:28:46.489 Active Namespaces 00:28:46.489 ================= 00:28:46.489 Discovery Log Page 00:28:46.489 ================== 00:28:46.489 Generation Counter: 2 00:28:46.489 Number of Records: 2 00:28:46.489 Record Format: 0 00:28:46.489 00:28:46.489 Discovery Log Entry 0 00:28:46.489 ---------------------- 00:28:46.489 Transport Type: 3 (TCP) 00:28:46.489 Address Family: 1 (IPv4) 00:28:46.489 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:46.489 Entry Flags: 00:28:46.489 Duplicate Returned Information: 0 00:28:46.489 Explicit Persistent Connection Support for Discovery: 0 00:28:46.489 Transport Requirements: 00:28:46.489 Secure Channel: Not Specified 00:28:46.489 Port ID: 1 (0x0001) 00:28:46.489 Controller ID: 65535 (0xffff) 00:28:46.489 Admin Max SQ Size: 32 00:28:46.489 Transport Service Identifier: 4420 00:28:46.489 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:46.489 Transport Address: 10.0.0.1 00:28:46.489 Discovery Log Entry 1 00:28:46.489 ---------------------- 00:28:46.489 Transport Type: 3 (TCP) 00:28:46.489 Address Family: 1 (IPv4) 00:28:46.489 Subsystem Type: 2 (NVM Subsystem) 00:28:46.489 Entry Flags: 00:28:46.489 Duplicate Returned Information: 0 00:28:46.489 Explicit Persistent Connection Support for Discovery: 0 00:28:46.489 Transport Requirements: 00:28:46.489 Secure Channel: Not Specified 00:28:46.489 Port ID: 1 (0x0001) 00:28:46.489 Controller ID: 65535 (0xffff) 00:28:46.489 Admin Max SQ Size: 32 00:28:46.489 Transport Service Identifier: 4420 00:28:46.489 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:46.489 Transport Address: 10.0.0.1 00:28:46.489 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:46.489 get_feature(0x01) failed 00:28:46.489 get_feature(0x02) failed 00:28:46.489 get_feature(0x04) failed 00:28:46.489 ===================================================== 00:28:46.489 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:46.489 ===================================================== 00:28:46.489 Controller Capabilities/Features 00:28:46.489 ================================ 00:28:46.489 Vendor ID: 0000 00:28:46.489 Subsystem Vendor ID: 0000 00:28:46.489 Serial Number: 5b2879b0a6608b2b2f9c 00:28:46.489 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:46.489 Firmware Version: 6.8.9-20 00:28:46.489 Recommended Arb Burst: 6 00:28:46.489 IEEE OUI Identifier: 00 00 00 00:28:46.489 Multi-path I/O 00:28:46.489 May have multiple subsystem ports: Yes 00:28:46.489 May have multiple controllers: Yes 00:28:46.489 Associated with SR-IOV VF: No 00:28:46.489 Max Data Transfer Size: Unlimited 00:28:46.489 Max Number of Namespaces: 1024 00:28:46.489 Max Number of I/O Queues: 128 00:28:46.489 NVMe Specification Version (VS): 1.3 00:28:46.489 NVMe Specification Version (Identify): 1.3 00:28:46.489 Maximum Queue Entries: 1024 00:28:46.489 Contiguous Queues Required: No 00:28:46.489 Arbitration Mechanisms Supported 00:28:46.489 Weighted Round Robin: Not Supported 00:28:46.489 Vendor Specific: Not Supported 00:28:46.489 Reset Timeout: 7500 ms 00:28:46.489 Doorbell Stride: 4 bytes 00:28:46.489 NVM Subsystem Reset: Not Supported 00:28:46.489 Command Sets Supported 00:28:46.489 NVM Command Set: Supported 00:28:46.489 Boot Partition: Not Supported 00:28:46.489 Memory 
Page Size Minimum: 4096 bytes 00:28:46.489 Memory Page Size Maximum: 4096 bytes 00:28:46.489 Persistent Memory Region: Not Supported 00:28:46.489 Optional Asynchronous Events Supported 00:28:46.489 Namespace Attribute Notices: Supported 00:28:46.489 Firmware Activation Notices: Not Supported 00:28:46.489 ANA Change Notices: Supported 00:28:46.489 PLE Aggregate Log Change Notices: Not Supported 00:28:46.489 LBA Status Info Alert Notices: Not Supported 00:28:46.490 EGE Aggregate Log Change Notices: Not Supported 00:28:46.490 Normal NVM Subsystem Shutdown event: Not Supported 00:28:46.490 Zone Descriptor Change Notices: Not Supported 00:28:46.490 Discovery Log Change Notices: Not Supported 00:28:46.490 Controller Attributes 00:28:46.490 128-bit Host Identifier: Supported 00:28:46.490 Non-Operational Permissive Mode: Not Supported 00:28:46.490 NVM Sets: Not Supported 00:28:46.490 Read Recovery Levels: Not Supported 00:28:46.490 Endurance Groups: Not Supported 00:28:46.490 Predictable Latency Mode: Not Supported 00:28:46.490 Traffic Based Keep ALive: Supported 00:28:46.490 Namespace Granularity: Not Supported 00:28:46.490 SQ Associations: Not Supported 00:28:46.490 UUID List: Not Supported 00:28:46.490 Multi-Domain Subsystem: Not Supported 00:28:46.490 Fixed Capacity Management: Not Supported 00:28:46.490 Variable Capacity Management: Not Supported 00:28:46.490 Delete Endurance Group: Not Supported 00:28:46.490 Delete NVM Set: Not Supported 00:28:46.490 Extended LBA Formats Supported: Not Supported 00:28:46.490 Flexible Data Placement Supported: Not Supported 00:28:46.490 00:28:46.490 Controller Memory Buffer Support 00:28:46.490 ================================ 00:28:46.490 Supported: No 00:28:46.490 00:28:46.490 Persistent Memory Region Support 00:28:46.490 ================================ 00:28:46.490 Supported: No 00:28:46.490 00:28:46.490 Admin Command Set Attributes 00:28:46.490 ============================ 00:28:46.490 Security Send/Receive: Not Supported 00:28:46.490 Format NVM: Not Supported 00:28:46.490 Firmware Activate/Download: Not Supported 00:28:46.490 Namespace Management: Not Supported 00:28:46.490 Device Self-Test: Not Supported 00:28:46.490 Directives: Not Supported 00:28:46.490 NVMe-MI: Not Supported 00:28:46.490 Virtualization Management: Not Supported 00:28:46.490 Doorbell Buffer Config: Not Supported 00:28:46.490 Get LBA Status Capability: Not Supported 00:28:46.490 Command & Feature Lockdown Capability: Not Supported 00:28:46.490 Abort Command Limit: 4 00:28:46.490 Async Event Request Limit: 4 00:28:46.490 Number of Firmware Slots: N/A 00:28:46.490 Firmware Slot 1 Read-Only: N/A 00:28:46.490 Firmware Activation Without Reset: N/A 00:28:46.490 Multiple Update Detection Support: N/A 00:28:46.490 Firmware Update Granularity: No Information Provided 00:28:46.490 Per-Namespace SMART Log: Yes 00:28:46.490 Asymmetric Namespace Access Log Page: Supported 00:28:46.490 ANA Transition Time : 10 sec 00:28:46.490 00:28:46.490 Asymmetric Namespace Access Capabilities 00:28:46.490 ANA Optimized State : Supported 00:28:46.490 ANA Non-Optimized State : Supported 00:28:46.490 ANA Inaccessible State : Supported 00:28:46.490 ANA Persistent Loss State : Supported 00:28:46.490 ANA Change State : Supported 00:28:46.490 ANAGRPID is not changed : No 00:28:46.490 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:46.490 00:28:46.490 ANA Group Identifier Maximum : 128 00:28:46.490 Number of ANA Group Identifiers : 128 00:28:46.490 Max Number of Allowed Namespaces : 1024 00:28:46.490 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:28:46.490 Command Effects Log Page: Supported 00:28:46.490 Get Log Page Extended Data: Supported 00:28:46.490 Telemetry Log Pages: Not Supported 00:28:46.490 Persistent Event Log Pages: Not Supported 00:28:46.490 Supported Log Pages Log Page: May Support 00:28:46.490 Commands Supported & Effects Log Page: Not Supported 00:28:46.490 Feature Identifiers & Effects Log Page:May Support 00:28:46.490 NVMe-MI Commands & Effects Log Page: May Support 00:28:46.490 Data Area 4 for Telemetry Log: Not Supported 00:28:46.490 Error Log Page Entries Supported: 128 00:28:46.490 Keep Alive: Supported 00:28:46.490 Keep Alive Granularity: 1000 ms 00:28:46.490 00:28:46.490 NVM Command Set Attributes 00:28:46.490 ========================== 00:28:46.490 Submission Queue Entry Size 00:28:46.490 Max: 64 00:28:46.490 Min: 64 00:28:46.490 Completion Queue Entry Size 00:28:46.490 Max: 16 00:28:46.490 Min: 16 00:28:46.490 Number of Namespaces: 1024 00:28:46.490 Compare Command: Not Supported 00:28:46.490 Write Uncorrectable Command: Not Supported 00:28:46.490 Dataset Management Command: Supported 00:28:46.490 Write Zeroes Command: Supported 00:28:46.490 Set Features Save Field: Not Supported 00:28:46.490 Reservations: Not Supported 00:28:46.490 Timestamp: Not Supported 00:28:46.490 Copy: Not Supported 00:28:46.490 Volatile Write Cache: Present 00:28:46.490 Atomic Write Unit (Normal): 1 00:28:46.490 Atomic Write Unit (PFail): 1 00:28:46.490 Atomic Compare & Write Unit: 1 00:28:46.490 Fused Compare & Write: Not Supported 00:28:46.490 Scatter-Gather List 00:28:46.490 SGL Command Set: Supported 00:28:46.490 SGL Keyed: Not Supported 00:28:46.490 SGL Bit Bucket Descriptor: Not Supported 00:28:46.490 SGL Metadata Pointer: Not Supported 00:28:46.490 Oversized SGL: Not Supported 00:28:46.490 SGL Metadata Address: Not Supported 00:28:46.490 SGL Offset: Supported 00:28:46.490 Transport SGL Data Block: Not Supported 00:28:46.490 Replay Protected Memory Block: Not Supported 00:28:46.490 00:28:46.490 Firmware Slot Information 00:28:46.490 ========================= 00:28:46.490 Active slot: 0 00:28:46.490 00:28:46.490 Asymmetric Namespace Access 00:28:46.490 =========================== 00:28:46.490 Change Count : 0 00:28:46.490 Number of ANA Group Descriptors : 1 00:28:46.490 ANA Group Descriptor : 0 00:28:46.490 ANA Group ID : 1 00:28:46.490 Number of NSID Values : 1 00:28:46.490 Change Count : 0 00:28:46.490 ANA State : 1 00:28:46.490 Namespace Identifier : 1 00:28:46.490 00:28:46.490 Commands Supported and Effects 00:28:46.490 ============================== 00:28:46.490 Admin Commands 00:28:46.490 -------------- 00:28:46.490 Get Log Page (02h): Supported 00:28:46.490 Identify (06h): Supported 00:28:46.490 Abort (08h): Supported 00:28:46.490 Set Features (09h): Supported 00:28:46.490 Get Features (0Ah): Supported 00:28:46.490 Asynchronous Event Request (0Ch): Supported 00:28:46.490 Keep Alive (18h): Supported 00:28:46.490 I/O Commands 00:28:46.490 ------------ 00:28:46.490 Flush (00h): Supported 00:28:46.490 Write (01h): Supported LBA-Change 00:28:46.490 Read (02h): Supported 00:28:46.490 Write Zeroes (08h): Supported LBA-Change 00:28:46.490 Dataset Management (09h): Supported 00:28:46.490 00:28:46.490 Error Log 00:28:46.490 ========= 00:28:46.490 Entry: 0 00:28:46.490 Error Count: 0x3 00:28:46.490 Submission Queue Id: 0x0 00:28:46.490 Command Id: 0x5 00:28:46.490 Phase Bit: 0 00:28:46.490 Status Code: 0x2 00:28:46.490 Status Code Type: 0x0 00:28:46.490 Do Not Retry: 1 00:28:46.490 Error 
Location: 0x28 00:28:46.490 LBA: 0x0 00:28:46.490 Namespace: 0x0 00:28:46.490 Vendor Log Page: 0x0 00:28:46.490 ----------- 00:28:46.490 Entry: 1 00:28:46.490 Error Count: 0x2 00:28:46.490 Submission Queue Id: 0x0 00:28:46.490 Command Id: 0x5 00:28:46.490 Phase Bit: 0 00:28:46.490 Status Code: 0x2 00:28:46.490 Status Code Type: 0x0 00:28:46.490 Do Not Retry: 1 00:28:46.490 Error Location: 0x28 00:28:46.490 LBA: 0x0 00:28:46.490 Namespace: 0x0 00:28:46.490 Vendor Log Page: 0x0 00:28:46.490 ----------- 00:28:46.490 Entry: 2 00:28:46.490 Error Count: 0x1 00:28:46.490 Submission Queue Id: 0x0 00:28:46.490 Command Id: 0x4 00:28:46.490 Phase Bit: 0 00:28:46.490 Status Code: 0x2 00:28:46.490 Status Code Type: 0x0 00:28:46.490 Do Not Retry: 1 00:28:46.490 Error Location: 0x28 00:28:46.490 LBA: 0x0 00:28:46.490 Namespace: 0x0 00:28:46.490 Vendor Log Page: 0x0 00:28:46.490 00:28:46.490 Number of Queues 00:28:46.490 ================ 00:28:46.490 Number of I/O Submission Queues: 128 00:28:46.490 Number of I/O Completion Queues: 128 00:28:46.490 00:28:46.490 ZNS Specific Controller Data 00:28:46.490 ============================ 00:28:46.490 Zone Append Size Limit: 0 00:28:46.490 00:28:46.490 00:28:46.490 Active Namespaces 00:28:46.490 ================= 00:28:46.490 get_feature(0x05) failed 00:28:46.490 Namespace ID:1 00:28:46.490 Command Set Identifier: NVM (00h) 00:28:46.490 Deallocate: Supported 00:28:46.490 Deallocated/Unwritten Error: Not Supported 00:28:46.490 Deallocated Read Value: Unknown 00:28:46.490 Deallocate in Write Zeroes: Not Supported 00:28:46.490 Deallocated Guard Field: 0xFFFF 00:28:46.490 Flush: Supported 00:28:46.490 Reservation: Not Supported 00:28:46.490 Namespace Sharing Capabilities: Multiple Controllers 00:28:46.490 Size (in LBAs): 1310720 (5GiB) 00:28:46.490 Capacity (in LBAs): 1310720 (5GiB) 00:28:46.490 Utilization (in LBAs): 1310720 (5GiB) 00:28:46.490 UUID: a943a4da-9a72-414b-bc53-37b986380e77 00:28:46.490 Thin Provisioning: Not Supported 00:28:46.490 Per-NS Atomic Units: Yes 00:28:46.490 Atomic Boundary Size (Normal): 0 00:28:46.490 Atomic Boundary Size (PFail): 0 00:28:46.490 Atomic Boundary Offset: 0 00:28:46.490 NGUID/EUI64 Never Reused: No 00:28:46.490 ANA group ID: 1 00:28:46.490 Namespace Write Protected: No 00:28:46.490 Number of LBA Formats: 1 00:28:46.491 Current LBA Format: LBA Format #00 00:28:46.491 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:28:46.491 00:28:46.491 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:46.491 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:46.491 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:28:46.749 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:46.749 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:28:46.749 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:46.749 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:46.749 rmmod nvme_tcp 00:28:46.749 rmmod nvme_fabrics 00:28:46.749 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:46.749 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:28:46.749 18:49:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:28:46.749 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:28:46.749 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:46.749 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:46.749 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:46.749 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:28:46.749 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:28:46.749 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:28:46.749 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:46.749 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:46.749 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:46.749 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:46.750 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:46.750 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:46.750 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:46.750 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:46.750 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:46.750 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:46.750 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:46.750 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:46.750 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:46.750 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:46.750 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:46.750 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:46.750 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:46.750 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.750 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.750 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.009 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:28:47.009 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:47.009 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:47.009 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:28:47.009 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:47.009 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:47.009 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:47.009 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:47.009 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:28:47.009 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:28:47.009 18:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:47.577 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:47.836 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:47.836 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:47.836 ************************************ 00:28:47.836 END TEST nvmf_identify_kernel_target 00:28:47.836 ************************************ 00:28:47.836 00:28:47.836 real 0m3.309s 00:28:47.836 user 0m1.195s 00:28:47.836 sys 0m1.530s 00:28:47.836 18:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:47.836 18:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:47.836 18:49:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:47.836 18:49:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:47.836 18:49:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:47.836 18:49:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.836 ************************************ 00:28:47.836 START TEST nvmf_auth_host 00:28:47.836 ************************************ 00:28:47.836 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:48.096 * Looking for test storage... 
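[editor's note] The trace above built and then tore down an in-kernel NVMe-oF target entirely through the nvmet configfs tree. Below is a condensed sketch of that lifecycle; the NQN, backing device, and address are the values this run happened to pick, and the attribute names (attr_model, attr_allow_any_host, device_path, addr_traddr, ...) are the standard nvmet configfs entries, filled in here because the trace only shows the values being echoed.

  # Sketch of the kernel target lifecycle driven above (assumes the nvmet
  # modules are available and /sys/kernel/config is mounted).
  modprobe -a nvmet nvmet_tcp
  cfs=/sys/kernel/config/nvmet
  nqn=nqn.2016-06.io.spdk:testnqn

  # Subsystem with one namespace backed by a free (non-GPT, non-zoned)
  # block device, as selected by the block_in_use checks above.
  mkdir "$cfs/subsystems/$nqn" "$cfs/subsystems/$nqn/namespaces/1" "$cfs/ports/1"
  echo "SPDK-$nqn" > "$cfs/subsystems/$nqn/attr_model"
  echo 1 > "$cfs/subsystems/$nqn/attr_allow_any_host"
  echo /dev/nvme1n1 > "$cfs/subsystems/$nqn/namespaces/1/device_path"
  echo 1 > "$cfs/subsystems/$nqn/namespaces/1/enable"

  # TCP listener on 10.0.0.1:4420; the symlink publishes the subsystem.
  echo 10.0.0.1 > "$cfs/ports/1/addr_traddr"
  echo tcp > "$cfs/ports/1/addr_trtype"
  echo 4420 > "$cfs/ports/1/addr_trsvcid"
  echo ipv4 > "$cfs/ports/1/addr_adrfam"
  ln -s "$cfs/subsystems/$nqn" "$cfs/ports/1/subsystems/"

  # Teardown mirrors setup in reverse, as clean_kernel_target does above:
  # disable the namespace, unlink the port, remove the directories, then
  # unload the modules.
  echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"
  rm -f "$cfs/ports/1/subsystems/$nqn"
  rmdir "$cfs/subsystems/$nqn/namespaces/1" "$cfs/ports/1" "$cfs/subsystems/$nqn"
  modprobe -r nvmet_tcp nvmet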
00:28:48.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:48.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.096 --rc genhtml_branch_coverage=1 00:28:48.096 --rc genhtml_function_coverage=1 00:28:48.096 --rc genhtml_legend=1 00:28:48.096 --rc geninfo_all_blocks=1 00:28:48.096 --rc geninfo_unexecuted_blocks=1 00:28:48.096 00:28:48.096 ' 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:48.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.096 --rc genhtml_branch_coverage=1 00:28:48.096 --rc genhtml_function_coverage=1 00:28:48.096 --rc genhtml_legend=1 00:28:48.096 --rc geninfo_all_blocks=1 00:28:48.096 --rc geninfo_unexecuted_blocks=1 00:28:48.096 00:28:48.096 ' 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:48.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.096 --rc genhtml_branch_coverage=1 00:28:48.096 --rc genhtml_function_coverage=1 00:28:48.096 --rc genhtml_legend=1 00:28:48.096 --rc geninfo_all_blocks=1 00:28:48.096 --rc geninfo_unexecuted_blocks=1 00:28:48.096 00:28:48.096 ' 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:48.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.096 --rc genhtml_branch_coverage=1 00:28:48.096 --rc genhtml_function_coverage=1 00:28:48.096 --rc genhtml_legend=1 00:28:48.096 --rc geninfo_all_blocks=1 00:28:48.096 --rc geninfo_unexecuted_blocks=1 00:28:48.096 00:28:48.096 ' 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.096 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:48.096 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:48.097 Cannot find device "nvmf_init_br" 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:48.097 Cannot find device "nvmf_init_br2" 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:48.097 Cannot find device "nvmf_tgt_br" 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:48.097 Cannot find device "nvmf_tgt_br2" 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:48.097 Cannot find device "nvmf_init_br" 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:48.097 Cannot find device "nvmf_init_br2" 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:28:48.097 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:48.356 Cannot find device "nvmf_tgt_br" 00:28:48.356 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:28:48.356 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:48.356 Cannot find device "nvmf_tgt_br2" 00:28:48.356 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:28:48.356 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:48.356 Cannot find device "nvmf_br" 00:28:48.356 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:28:48.356 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:48.356 Cannot find device "nvmf_init_if" 00:28:48.356 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:28:48.356 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:48.356 Cannot find device "nvmf_init_if2" 00:28:48.356 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:28:48.356 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:48.356 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:48.356 18:49:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:28:48.356 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:48.356 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:48.356 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:28:48.356 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:48.356 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:48.356 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:48.356 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:48.356 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:48.357 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:48.357 18:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:48.357 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:48.357 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:48.357 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:48.357 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:48.357 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:48.357 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:48.357 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:48.357 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:48.357 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:48.357 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:48.357 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:48.357 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:48.357 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:48.357 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:48.357 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:48.357 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
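[editor's note] nvmf_veth_init is building the usual two-sided test topology here: each interface is one end of a veth pair, the *_if ends carry the addresses (with the target ends moved into the nvmf_tgt_ns_spdk namespace), and the *_br ends are enslaved to the nvmf_br bridge so initiator and target can reach each other. Reduced to a single pair per side, the wiring is sketched below with the names and addresses as they appear in the trace.

  # One initiator-side and one target-side veth pair, condensed.
  ip netns add nvmf_tgt_ns_spdk

  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk   # target end lives in the namespace

  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target address

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up

  # The bridge stitches the *_br ends into one L2 segment.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

The pings that follow verify each leg of this topology: 10.0.0.3 and 10.0.0.4 from the root namespace, and 10.0.0.1 and 10.0.0.2 from inside the target namespace.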
00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:48.616 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:48.616 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:28:48.616 00:28:48.616 --- 10.0.0.3 ping statistics --- 00:28:48.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.616 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:48.616 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:48.616 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:28:48.616 00:28:48.616 --- 10.0.0.4 ping statistics --- 00:28:48.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.616 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:48.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:48.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:28:48.616 00:28:48.616 --- 10.0.0.1 ping statistics --- 00:28:48.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.616 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:48.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:48.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:28:48.616 00:28:48.616 --- 10.0.0.2 ping statistics --- 00:28:48.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.616 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # return 0 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.616 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:48.617 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:48.617 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:48.617 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:48.617 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:48.617 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.617 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=112798 00:28:48.617 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:48.617 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 112798 00:28:48.617 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 112798 ']' 00:28:48.617 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.617 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:48.617 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
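[editor's note] nvmfappstart launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the app is usable. The helper's exact body lives in autotest_common.sh and is not shown in the trace; as a rough approximation, the wait amounts to polling the reported pid and the default RPC socket:

  # Approximation of waitforlisten (not the helper's actual body): poll
  # until the target answers on its RPC socket, bailing if it dies first.
  nvmfpid=112798                 # pid reported in the trace
  rpc_sock=/var/tmp/spdk.sock    # default SPDK RPC socket
  for ((i = 0; i < 100; i++)); do
      kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; then
          break                  # RPC is up, the target is ready
      fi
      sleep 0.1
  done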
00:28:48.617 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:48.617 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ac412a5372a7c265e7518c831f755d92 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.HbI 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ac412a5372a7c265e7518c831f755d92 0 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ac412a5372a7c265e7518c831f755d92 0 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=ac412a5372a7c265e7518c831f755d92 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.HbI 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.HbI 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.HbI 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:28:49.185 18:49:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=c315233efcd05d08c6533795087ad20cff1443ae22a2f65f09befbde1152ee0c 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.9lV 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key c315233efcd05d08c6533795087ad20cff1443ae22a2f65f09befbde1152ee0c 3 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 c315233efcd05d08c6533795087ad20cff1443ae22a2f65f09befbde1152ee0c 3 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=c315233efcd05d08c6533795087ad20cff1443ae22a2f65f09befbde1152ee0c 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.9lV 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.9lV 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.9lV 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=b61166a7cced1be6a386feb1f3b7cb65d15dc2f370a0fef0 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.rqF 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key b61166a7cced1be6a386feb1f3b7cb65d15dc2f370a0fef0 0 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 b61166a7cced1be6a386feb1f3b7cb65d15dc2f370a0fef0 0 
00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=b61166a7cced1be6a386feb1f3b7cb65d15dc2f370a0fef0 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.rqF 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.rqF 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.rqF 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:49.185 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:28:49.186 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:49.186 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:28:49.186 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:28:49.186 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:28:49.186 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:49.186 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=d71e1147fdc66f58e587c19e36ce9075d59408c16548c196 00:28:49.186 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:28:49.186 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.tKR 00:28:49.186 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key d71e1147fdc66f58e587c19e36ce9075d59408c16548c196 2 00:28:49.186 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 d71e1147fdc66f58e587c19e36ce9075d59408c16548c196 2 00:28:49.186 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:28:49.186 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:28:49.186 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=d71e1147fdc66f58e587c19e36ce9075d59408c16548c196 00:28:49.186 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:28:49.186 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:28:49.445 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.tKR 00:28:49.445 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.tKR 00:28:49.445 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.tKR 00:28:49.445 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:49.445 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:28:49.445 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:49.445 18:49:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:28:49.445 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:28:49.445 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:28:49.445 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:49.445 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=dd00a5545466ca9c546b867874aea1ca 00:28:49.445 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:28:49.445 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.fVy 00:28:49.445 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key dd00a5545466ca9c546b867874aea1ca 1 00:28:49.445 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 dd00a5545466ca9c546b867874aea1ca 1 00:28:49.445 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:28:49.445 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:28:49.445 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=dd00a5545466ca9c546b867874aea1ca 00:28:49.445 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:28:49.445 18:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.fVy 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.fVy 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.fVy 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=a2ced39968303a386199f1057acd5090 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.HKF 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key a2ced39968303a386199f1057acd5090 1 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 a2ced39968303a386199f1057acd5090 1 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=a2ced39968303a386199f1057acd5090 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.HKF 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.HKF 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.HKF 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=3568075a7f2e21309177ba8e6af5d8c00f6b4b6a41e8604a 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.PuV 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 3568075a7f2e21309177ba8e6af5d8c00f6b4b6a41e8604a 2 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 3568075a7f2e21309177ba8e6af5d8c00f6b4b6a41e8604a 2 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=3568075a7f2e21309177ba8e6af5d8c00f6b4b6a41e8604a 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.PuV 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.PuV 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.PuV 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:28:49.445 18:49:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=6cb15a7a0d7879c49a7932c2d3c5def1 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.XLb 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 6cb15a7a0d7879c49a7932c2d3c5def1 0 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 6cb15a7a0d7879c49a7932c2d3c5def1 0 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=6cb15a7a0d7879c49a7932c2d3c5def1 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:28:49.445 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.XLb 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.XLb 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.XLb 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=19b56c4e86db29d556c16dd87e707ded4142711460bc470c749e4eb0cd6da7ef 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.05P 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 19b56c4e86db29d556c16dd87e707ded4142711460bc470c749e4eb0cd6da7ef 3 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 19b56c4e86db29d556c16dd87e707ded4142711460bc470c749e4eb0cd6da7ef 3 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=19b56c4e86db29d556c16dd87e707ded4142711460bc470c749e4eb0cd6da7ef 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.05P 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.05P 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.05P 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 112798 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 112798 ']' 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:49.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:49.704 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.962 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:49.962 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:28:49.962 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:49.962 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.HbI 00:28:49.962 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.962 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.962 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.962 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.9lV ]] 00:28:49.962 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9lV 00:28:49.962 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.962 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.962 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.962 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:49.962 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.rqF 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.tKR ]] 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
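With all five key files written out, the trace registers them with the target's keyring so they can later be referenced by name: key0..key4 are the host secrets, ckey0..ckey3 the controller secrets used for bidirectional authentication (keyid 4 deliberately has no ckey). The same RPCs issued directly, assuming scripts/rpc.py, which is what rpc_cmd wraps here; the paths are the mktemp outputs from the trace:

    rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.HbI
    rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9lV
    rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.rqF
    rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tKR
    # ... and likewise for key2/ckey2, key3/ckey3, key4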
/tmp/spdk.key-sha384.tKR 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.fVy 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.HKF ]] 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HKF 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.PuV 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.XLb ]] 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.XLb 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.05P 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:28:49.963 18:49:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
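get_main_ns_ip, traced above, simply maps the transport to the environment variable that holds the address to dial and then dereferences it. A compact restatement, with TEST_TRANSPORT standing in for however the suite selected tcp in this run:

    declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
    echo "${!ip}"                          # indirect expansion -> 10.0.0.1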
-e /sys/module/nvmet ]] 00:28:49.963 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:28:50.221 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:50.221 18:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:50.480 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:50.480 Waiting for block devices as requested 00:28:50.480 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:50.739 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:51.306 No valid GPT data, bailing 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:28:51.306 18:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:51.306 No valid GPT data, bailing 00:28:51.306 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:28:51.306 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:51.306 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
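The device scan below walks /sys/block/nvme*, skips zoned namespaces, and treats "No valid GPT data, bailing" plus an empty blkid PTTYPE as proof that a disk is idle and safe to export. A hypothetical condensation of that loop (the real block_in_use also consults spdk-gpt.py, omitted here for brevity):

    # Pick a free, non-zoned NVMe disk to back the kernel target namespace.
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        [[ $(<"$block/queue/zoned") == none ]] || continue   # skip zoned
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] && nvme=/dev/$dev
    done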
scripts/common.sh@395 -- # return 1 00:28:51.306 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:28:51.306 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:28:51.306 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:51.306 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:28:51.306 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:28:51.306 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:51.306 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:51.306 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:28:51.306 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:28:51.306 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:51.564 No valid GPT data, bailing 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:51.564 No valid GPT data, bailing 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:28:51.564 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:28:51.565 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:28:51.565 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:28:51.565 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:51.565 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -a 10.0.0.1 -t tcp -s 4420 00:28:51.565 00:28:51.565 Discovery Log Number of Records 2, Generation counter 2 00:28:51.565 =====Discovery Log Entry 0====== 00:28:51.565 trtype: tcp 00:28:51.565 adrfam: ipv4 00:28:51.565 subtype: current discovery subsystem 00:28:51.565 treq: not specified, sq flow control disable supported 00:28:51.565 portid: 1 00:28:51.565 trsvcid: 4420 00:28:51.565 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:51.565 traddr: 10.0.0.1 00:28:51.565 eflags: none 00:28:51.565 sectype: none 00:28:51.565 =====Discovery Log Entry 1====== 00:28:51.565 trtype: tcp 00:28:51.565 adrfam: ipv4 00:28:51.565 subtype: nvme subsystem 00:28:51.565 treq: not specified, sq flow control disable supported 00:28:51.565 portid: 1 00:28:51.565 trsvcid: 4420 00:28:51.565 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:51.565 traddr: 10.0.0.1 00:28:51.565 eflags: none 00:28:51.565 sectype: none 00:28:51.565 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:51.565 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:51.565 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:51.565 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:51.565 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.565 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:51.565 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:51.565 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:51.565 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:28:51.565 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
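Everything the kernel target needs is assembled through configfs: a subsystem with one namespace backed by the free disk found above, a TCP port on 10.0.0.1:4420, and symlinks wiring the subsystem into the port and the host NQN into allowed_hosts. The echoed values all appear in the trace; the attribute filenames below are the standard nvmet ones and are my inference, since xtrace shows only the values being echoed, not the redirection targets:

    cfs=/sys/kernel/config/nvmet
    subsys=$cfs/subsystems/nqn.2024-02.io.spdk:cnode0
    mkdir "$subsys" "$subsys/namespaces/1" "$cfs/ports/1"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$cfs/ports/1/addr_traddr"
    echo tcp          > "$cfs/ports/1/addr_trtype"
    echo 4420         > "$cfs/ports/1/addr_trsvcid"
    echo ipv4         > "$cfs/ports/1/addr_adrfam"
    ln -s "$subsys" "$cfs/ports/1/subsystems/"
    nvme discover -t tcp -a 10.0.0.1 -s 4420   # yields the two log entries above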
ckey=DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:28:51.565 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:51.565 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: ]] 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 
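nvmet_auth_set_key pushes the per-iteration hash, DH group, and secrets into the host entry created under /sys/kernel/config/nvmet/hosts; 'hmac(sha256)' and 'ffdhe2048' are the kernel crypto names for this round of the sweep. The dhchap_* attribute paths below are assumed (again the trace shows only the echoed values); the two secrets are copied verbatim from it:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe2048      > "$host/dhchap_dhgroup"
    echo "DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==:" > "$host/dhchap_key"
    # controller secret enables bidirectional (controller-to-host) auth
    echo "DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==:" > "$host/dhchap_ctrl_key"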
10.0.0.1 ]] 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.824 nvme0n1 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: ]] 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
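On the initiator side, connect_authenticate first advertises every digest and DH group to the bdev layer, then attaches using the keyring names registered earlier; when the DH-HMAC-CHAP handshake succeeds, the namespace surfaces as nvme0n1 below. The same two RPCs stand-alone, issued here via rpc_cmd:

    rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1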
host/auth.sh@51 -- # echo DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.824 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.082 nvme0n1 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.082 
18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: ]] 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:28:52.082 18:49:07 
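Each successful attach is verified and torn down before the next digest/DH-group/key combination, which is the get_controllers/jq/detach triple that repeats through the rest of the log:

    name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1            # handshake must have produced nvme0
    rpc.py bdev_nvme_detach_controller nvme0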
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.082 nvme0n1 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.082 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:28:52.341 18:49:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: ]] 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.341 nvme0n1 00:28:52.341 18:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: ]] 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.341 18:49:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.341 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.600 nvme0n1 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:52.600 
18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.600 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
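
[editor's note] Slot 4 of the ffdhe2048 sweep has just attached; the verify and detach that follow complete the same cycle every key slot goes through. Read back from the host/auth.sh@55-65 frames, the cycle looks roughly like the sketch below. The function body is a reconstruction from the trace, and the keys/ckeys arrays are populated earlier in the script, outside this excerpt:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3                       # auth.sh@55-57
        # @58: optional controller key; expands to nothing when ckeys[keyid] is empty
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"    # @60
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"                    # @61
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # @64
        rpc_cmd bdev_nvme_detach_controller nvme0                      # @65
    }
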
00:28:52.863 nvme0n1 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:52.863 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: ]] 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:53.167 18:49:08 
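
[editor's note] The echo commands at host/auth.sh@48-51 push the digest, DH group, and secrets to the kernel nvmet target; xtrace strips their redirections, so the destinations never appear in the log. A sketch under the assumption that they land in the standard nvmet configfs host attributes; the configfs path and host directory are assumptions, not shown in this excerpt:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3 key ckey     # auth.sh@42-44
        key=${keys[keyid]} ckey=${ckeys[keyid]}          # @45-46
        # Assumed destination: the nvmet host entry for the test hostnqn.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"       # @48
        echo "$dhgroup" > "$host/dhchap_dhgroup"         # @49
        echo "$key" > "$host/dhchap_key"                 # @50
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # @51
    }
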
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.167 nvme0n1 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.167 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.425 18:49:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: ]] 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.425 18:49:08 
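
[editor's note] The secrets traced here are in the standard NVMe DH-HMAC-CHAP interchange format, DHHC-1:<t>:<base64>:, where <t> tags the transform applied to the configured secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512); that is why all four tags show up across the five key slots. This run uses a fixed key table, but equivalent secrets can be produced with nvme-cli. An illustrative invocation, not taken from the log:

    # Generate a 32-byte DH-HMAC-CHAP secret transformed with SHA-256 (hmac tag 01):
    nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn=nqn.2024-02.io.spdk:host0
    # prints e.g. DHHC-1:01:<base64-encoded key and CRC>:
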
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.425 18:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.425 nvme0n1 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:28:53.425 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: ]] 00:28:53.426 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:28:53.426 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:53.426 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.426 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:53.426 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:53.426 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:53.426 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.426 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:53.426 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.426 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.684 nvme0n1 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: ]] 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.684 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.943 nvme0n1 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.943 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.202 nvme0n1 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:54.202 18:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: ]] 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.768 18:49:10 
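
[editor's note] The @101 and @102 frames that keep reappearing are the sweep itself: an outer loop over the DH groups and an inner loop over the five key slots, re-keying the target and reconnecting for every combination. The digest is sha256 throughout this excerpt. The shape, reconstructed from those frames:

    for dhgroup in "${dhgroups[@]}"; do      # auth.sh@101: ffdhe2048, ffdhe3072, ffdhe4096, ...
        for keyid in "${!keys[@]}"; do       # auth.sh@102: key slots 0-4
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"     # auth.sh@103
            connect_authenticate sha256 "$dhgroup" "$keyid"   # auth.sh@104
        done
    done
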
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.768 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.027 nvme0n1 00:28:55.027 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.027 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.027 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.027 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.027 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.027 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.027 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.027 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.027 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.027 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.027 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.027 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.027 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:55.027 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.027 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:55.027 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:55.027 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: ]] 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:55.028 18:49:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.028 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.286 nvme0n1 00:28:55.286 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.286 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.286 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.286 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.286 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.286 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.286 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.286 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.286 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.286 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.286 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.286 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.286 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:55.286 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.286 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:55.286 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:55.286 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:55.286 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: ]] 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.287 18:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.545 nvme0n1 00:28:55.545 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.545 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.545 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: ]] 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:55.546 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:55.805 nvme0n1
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=:
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=:
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:55.805 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:56.064 nvme0n1
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
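nvmet_auth_set_key (@42-@51 above) is the target half of the round: it pushes the same secret onto the kernel nvmet side before the host connects. The trace only shows the echo commands, not their redirect targets; the configfs paths below are an assumption based on the kernel nvmet authentication attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), so treat this as a sketch rather than the script itself:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Assumed location of the allowed-host entry for this test's host NQN.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)"  > "$host/dhchap_hash"     # @48 in the trace
        echo "$dhgroup"       > "$host/dhchap_dhgroup"  # @49
        echo "${keys[keyid]}" > "$host/dhchap_key"      # @50
        # The controller key is optional; when ckey is empty (keyid 4) only the
        # host authenticates itself. The @51 test guards exactly this case.
        [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
    }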
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC:
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=:
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:56.064 18:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC:
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: ]]
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=:
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:57.966 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:58.225 nvme0n1
00:28:58.225 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:58.225 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:58.225 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==:
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==:
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==:
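The @100/@101/@102 frames that reappear at each transition are the drivers of this whole section: every digest is crossed with every DH group and every key index. The array contents below are inferred from what the log actually iterates (sha256 giving way to sha384, DH groups climbing toward ffdhe8192, key ids 0-4), so they are an approximation of auth.sh rather than a copy of it:

    for digest in "${digests[@]}"; do            # host/auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do      # host/auth.sh@101
            for keyid in "${!keys[@]}"; do       # host/auth.sh@102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104
            done
        done
    done

Each inner iteration therefore produces one set-key/attach/verify/detach block like the ones surrounding this point.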
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: ]]
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==:
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:58.226 18:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:58.485 nvme0n1
00:28:58.485 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:58.485 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:58.485 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:58.485 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:58.485 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:58.485 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z:
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i:
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z:
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: ]]
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i:
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
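All of the secrets echoed in this section share the DHHC-1 container format from the NVMe DH-HMAC-CHAP specification: DHHC-1:<id>:<base64 payload>:, where the id records how the secret is to be transformed (00 = used as-is, 01/02/03 = the SHA-256/384/512 classes) and the payload is the secret with a 4-byte CRC appended. That makes the lengths in the log easy to sanity-check; for instance the 01-class key used for keyid 2 decodes to 36 bytes, i.e. a 32-byte secret plus the CRC:

    # Sanity check of one secret taken verbatim from the log.
    key='DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z:'
    cut -d: -f3 <<<"$key" | base64 -d | wc -c   # prints 36 (32 + 4)

By the same arithmetic the 02- and 03-class keys above decode to 52 and 68 bytes (48- and 64-byte secrets).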
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:58.744 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:59.003 nvme0n1
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==:
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z:
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==:
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: ]]
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z:
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:59.003 18:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:59.262 nvme0n1
00:28:59.262 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:59.262 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:59.262 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:59.262 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=:
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=:
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
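The @765-@779 run that precedes every attach is nvmf/common.sh's get_main_ns_ip picking the address to dial for the current transport. Reassembled from the trace into a standalone form (the indirection from the candidate variable name to 10.0.0.1 is not visible in the xtrace output and is inferred):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # @768
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # @769

        [[ -z $TEST_TRANSPORT ]] && return 1                    # @771: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @771
        ip=${ip_candidates[$TEST_TRANSPORT]}                    # @772
        ip=${!ip}                # NVMF_INITIATOR_IP -> 10.0.0.1 (inferred step)
        [[ -z $ip ]] && return 1                                # @774
        echo "$ip"                                              # @779
    }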
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:59.519 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:59.777 nvme0n1
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC:
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=:
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC:
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: ]]
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=:
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:28:59.777 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:00.344 nvme0n1
00:29:00.344 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:00.344 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:00.344 18:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==:
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==:
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==:
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: ]]
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==:
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
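Between attach and the next round, the test proves the handshake actually succeeded: the controller must exist under the expected name before it is torn down. The @64/@65 frames condense to the following (a sketch, not the literal script):

    # Fails the test when the connect above did not yield an authenticated
    # controller named nvme0.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The bare nvme0n1 lines scattered through the log are the bdev name reported by each successful attach.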
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:00.344 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:00.911 nvme0n1
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z:
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i:
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z:
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: ]]
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i:
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:00.911 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:01.169 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:01.169 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:01.169 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:29:01.169 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:01.169 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:01.169 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:01.169 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:01.169 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:29:01.169 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:29:01.169 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:29:01.169 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:01.169 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:01.169 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:29:01.169 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:01.169 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:29:01.169 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:29:01.169 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:29:01.169 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:01.170 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:01.170 18:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:01.737 nvme0n1
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==:
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z:
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==:
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: ]]
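One detail worth flagging as the sha256/ffdhe8192 rounds run: key ids 0-3 carry a controller key (ckeyN), so those handshakes authenticate both directions, while keyid 4's empty ckey makes the :+ expansion at @58 collapse to nothing and the host connects with host-side authentication only. The same conditional in isolation (keys/ckeys are the script's arrays, as above):

    # With ckeys[4]='' this produces an empty array, i.e. no extra attach flags.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "extra attach args: ${ckey[*]:-<none>}"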
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z:
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:01.737 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.305 nvme0n1
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=:
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=:
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:02.305 18:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.871 nvme0n1
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC:
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=:
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC:
00:29:02.871 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: ]]
00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=:
00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host --
common/autotest_common.sh@10 -- # set +x 00:29:02.872 nvme0n1 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: ]] 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:02.872 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:03.130 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.130 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:03.130 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:03.130 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:03.130 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.131 nvme0n1 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:03.131 
18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: ]] 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.131 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.390 nvme0n1 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: ]] 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.390 
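
The host side of the same iteration condenses to two RPCs, reproduced verbatim from the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py; key2 and ckey2 name key objects the script registered earlier in the run, outside this excerpt):

# Restrict the initiator to the combination under test, then attach with
# DH-HMAC-CHAP enabled; the attach only succeeds if authentication passes.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
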
18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.390 18:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.390 nvme0n1 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.390 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.663 nvme0n1 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: ]] 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.663 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.936 nvme0n1 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.936 
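
Between iterations the trace always runs the same verify-and-teardown step (host/auth.sh@64 and @65): the controller list must come back as exactly nvme0, and the controller is detached before the next keyid is programmed. As a sketch:

# Confirm the authenticated attach produced the expected controller,
# then remove it so the next digest/dhgroup/keyid combination starts clean.
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || exit 1
rpc_cmd bdev_nvme_detach_controller nvme0
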
18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: ]] 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:03.936 18:49:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.936 nvme0n1 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.936 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:04.196 18:49:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: ]] 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.196 nvme0n1 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: ]] 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.196 18:49:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:04.196 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:04.197 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.197 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.197 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:04.197 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.197 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:04.197 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:04.197 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:04.197 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:04.197 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.197 18:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.456 nvme0n1 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:04.456 
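
The get_main_ns_ip helper (nvmf/common.sh@765 through @779) unwinds the same way on every call in this trace: map the transport to an environment-variable name, dereference it, and fall through a chain of -z guards. A reconstruction from the xtrace (the TEST_TRANSPORT lookup variable is an assumption; the trace only shows the literal value tcp):

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1        # [[ -z tcp ]] in the trace
    ip=${ip_candidates[$TEST_TRANSPORT]}        # -> NVMF_INITIATOR_IP
    [[ -z $ip ]] && return 1
    [[ -z ${!ip} ]] && return 1                 # indirect: value of NVMF_INITIATOR_IP
    echo "${!ip}"                               # -> 10.0.0.1 in this run
}
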
18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.456 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
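
Structurally, everything in this section is one sweep of the nested loops at host/auth.sh@100 through @103: an outer loop over digests, a middle loop over DH groups, and an inner loop over key ids, with the target programmed and the host authenticated once per combination. The log so far covers sha256 with ffdhe8192 and sha384 with ffdhe2048 through ffdhe4096; the array contents below are inferred, with entries not yet seen in this excerpt marked as assumptions:

# Shape of the sweep driving this section of the log. keys[0..4] and the
# nvmet_auth_set_key/connect_authenticate helpers are defined earlier in
# auth.sh; sha512 and ffdhe6144 are assumed members of the usual test set.
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
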
00:29:04.716 nvme0n1 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: ]] 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:04.716 18:49:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.716 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.975 nvme0n1 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.975 18:49:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: ]] 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.975 18:49:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.975 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.234 nvme0n1 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: ]] 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.234 18:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.493 nvme0n1 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:05.493 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: ]] 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.494 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.753 nvme0n1 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.753 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.012 nvme0n1 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: ]] 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.012 18:49:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.012 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.271 nvme0n1 00:29:06.271 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.271 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.271 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.272 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.272 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.272 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.272 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.272 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.272 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.272 18:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: ]] 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:06.272 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:06.531 18:49:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.531 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.790 nvme0n1 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: ]] 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.790 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.049 nvme0n1 00:29:07.049 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.049 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.049 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.049 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.049 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.049 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.049 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.049 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.049 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.049 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: ]] 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.308 18:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.567 nvme0n1 00:29:07.567 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.567 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.567 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.567 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.567 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.567 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.567 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.567 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.567 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.567 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.567 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.567 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.567 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:07.567 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:07.568 18:49:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.568 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.827 nvme0n1 00:29:07.827 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.827 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.827 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.827 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.827 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.827 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: ]] 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.086 18:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.653 nvme0n1 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: ]] 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:08.653 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:08.654 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:08.654 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.654 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.221 nvme0n1 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.221 18:49:24 
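
The xtrace records in this stretch come from host/auth.sh sweeping every combination of digest, DH group, and key index: the @100-@102 lines are the loop headers, @103 programs the target side, and @104 runs the host-side connect. A minimal sketch of that driver loop, reconstructed from those trace lines (the full array contents are an assumption; only sha384/sha512 and ffdhe2048/3072/8192 appear in this excerpt):

digests=(sha256 sha384 sha512)                                      # assumed full set; sha384/sha512 visible here
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # assumed full set
for digest in "${digests[@]}"; do                    # host/auth.sh@100
    for dhgroup in "${dhgroups[@]}"; do              # host/auth.sh@101
        for keyid in "${!keys[@]}"; do               # host/auth.sh@102; keys 0..4 in this run
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # @103: program the target
            connect_authenticate "$digest" "$dhgroup" "$keyid"    # @104: authenticate from the host
        done
    done
done
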
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: ]] 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.221 18:49:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:09.221 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:09.222 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:09.222 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:09.222 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.222 18:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.789 nvme0n1 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: ]] 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:09.789 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.789 
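
Each nvmet_auth_set_key call (host/auth.sh@42-@51) resolves the key pair for the key index and echoes the digest name, DH group, key, and, when present, the controller key. The echo destinations are not visible in this excerpt; a hedged sketch, assuming the standard kernel nvmet configfs attributes, would be:

# Hypothetical reconstruction; the configfs paths and attribute names are
# assumptions, since the trace only shows the echo payloads (auth.sh@48-@51).
hostnqn=nqn.2024-02.io.spdk:host0
cfs=/sys/kernel/config/nvmet/hosts/$hostnqn
echo "hmac(sha384)" > "$cfs/dhchap_hash"       # digest, in kernel-crypto notation
echo "ffdhe8192"    > "$cfs/dhchap_dhgroup"    # FFDHE group under test
echo "$key"         > "$cfs/dhchap_key"        # host secret (DHHC-1:...)
[[ -n $ckey ]] && echo "$ckey" > "$cfs/dhchap_ctrl_key"   # bidirectional auth only
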
18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.357 nvme0n1 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.357 18:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.357 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.357 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.357 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:10.357 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:10.357 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:10.357 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.357 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.357 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:10.357 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.357 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:10.357 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:10.357 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:10.357 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:10.357 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.357 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.925 nvme0n1 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:10.925 18:49:26 
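
The secrets echoed above use the DHHC-1 textual representation commonly documented for NVMe DH-HMAC-CHAP: DHHC-1:<tt>:<base64>:, where <tt> selects the secret transform (00 for no transform; 01/02/03 corresponding to SHA-256/384/512) and the base64 payload carries the raw secret followed by a 4-byte CRC-32. As a check, the keyid-0 secret from this run decodes to a 32-byte key plus its checksum:

key='DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC:'
b64=${key#DHHC-1:*:}                  # strip the "DHHC-1:00:" prefix
b64=${b64%:}                          # and the trailing colon
echo -n "$b64" | base64 -d | wc -c    # prints 36: 32 key bytes + 4 CRC bytes
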
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: ]] 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:10.925 18:49:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:10.925 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:10.926 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:10.926 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.926 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.926 nvme0n1 00:29:10.926 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: ]] 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:11.185 18:49:26 
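
Before every attach, the test resolves the target address via get_main_ns_ip (nvmf/common.sh@765-@779). The trace shows the function mapping the transport to an environment-variable name and dereferencing it; a near-verbatim reconstruction follows, with the transport variable's name an assumption, since the trace only shows its expanded value, tcp:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()                                # nvmf/common.sh@766
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP               # @768
    ip_candidates["tcp"]=NVMF_INITIATOR_IP                   # @769
    [[ -z $TEST_TRANSPORT ]] && return 1                     # @771 ("tcp" in this run)
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # @771
    ip=${ip_candidates[$TEST_TRANSPORT]}                     # @772
    ip=${!ip}                                                # indirect expansion -> 10.0.0.1
    [[ -z $ip ]] && return 1                                 # @774
    echo "$ip"                                               # @779
}
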
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.185 nvme0n1 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: ]] 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:11.185 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.186 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:11.186 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.186 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.445 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.445 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.445 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:11.445 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:11.445 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:11.445 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.445 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.445 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:11.445 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.445 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:11.445 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:11.445 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:11.445 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:11.445 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.445 18:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.445 nvme0n1 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: ]] 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.445 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.710 nvme0n1 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # ip_candidates=() 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.710 nvme0n1 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.710 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: ]] 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:11.970 nvme0n1 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:11.970 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: ]] 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.971 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.230 nvme0n1 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:12.230 
18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: ]] 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.230 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.489 nvme0n1 00:29:12.489 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.489 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.489 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.489 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.489 18:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.489 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.489 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.489 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.489 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.489 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.489 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.489 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.489 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:12.489 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.489 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:12.489 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:12.489 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:12.489 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:12.489 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:12.489 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:12.489 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:12.489 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:12.489 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: ]] 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.490 
18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.490 nvme0n1 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.490 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.749 nvme0n1 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:29:12.749 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: ]] 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.750 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.009 nvme0n1 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.009 
18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: ]] 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:13.009 18:49:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.009 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.268 nvme0n1 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:13.268 18:49:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: ]] 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.268 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.269 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:13.269 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:13.269 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:13.269 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.269 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.269 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:13.269 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.269 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:13.269 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:13.269 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:13.269 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:13.269 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.269 18:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.528 nvme0n1 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: ]] 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.528 18:49:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.528 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.529 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:13.529 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.529 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:13.529 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:13.529 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:13.529 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:13.529 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.529 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.789 nvme0n1 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:13.789 
18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:13.789 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.790 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
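The sha512 passes traced above and below all follow one shape: for each key index, nvmet_auth_set_key pushes 'hmac(sha512)', the DH group, and the DHHC-1 secret(s) to the kernel nvmet target, then connect_authenticate pins the SPDK host to that digest/DH-group pair, attaches to 10.0.0.1:4420 over TCP, checks that controller nvme0 appears via bdev_nvme_get_controllers, and detaches before the next pass. Below is a minimal bash sketch of a single pass (the ffdhe4096/key4 iteration just logged), reconstructed only from the rpc_cmd and echo records visible in this trace rather than from the host/auth.sh source; the nvmet configfs paths are an assumption based on the kernel's dhchap layout, and key4 is assumed to have been registered with the SPDK keyring during test setup, which is outside this slice of the log.

# ---------------------------------------------------------------------------
# Sketch: one pass of the sha512 auth loop traced above. Reconstructed from
# the rpc_cmd/echo records in this log, NOT verbatim host/auth.sh source.
# Assumptions: nvmet configfs paths follow the kernel's dhchap layout, and
# "key4" names a key already registered with SPDK's keyring during setup.
# ---------------------------------------------------------------------------
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0
digest=sha512
dhgroup=ffdhe4096
keyid=4
key="DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=:"

# Target side (kernel nvmet): program the hash, DH group, and secret for
# this host. The trace shows the echoed payloads; the paths are assumed.
host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn
echo "hmac($digest)" > "$host_cfg/dhchap_hash"
echo "$dhgroup"      > "$host_cfg/dhchap_dhgroup"
echo "$key"          > "$host_cfg/dhchap_key"
# keyids 0-3 also program a controller key for bidirectional auth:
#   echo "$ckey" > "$host_cfg/dhchap_ctrl_key"

# Host side (SPDK): pin negotiation to the digest/DH group under test,
# then attach; DH-HMAC-CHAP runs in-band during the connect.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "key${keyid}"
# keyids 0-3 additionally pass: --dhchap-ctrlr-key "ckey${keyid}"

# Verify and tear down, mirroring the bdev_nvme_get_controllers check and
# bdev_nvme_detach_controller call in the records that follow.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0

Key indexes 0-3 are bidirectional: a ckeyN controller key is programmed on the target and passed as --dhchap-ctrlr-key on attach. Key index 4, as in the records immediately above and below, carries no controller key (ckey expands empty at host/auth.sh@58), so only the host is authenticated and no reverse controller authentication takes place.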
00:29:14.049 nvme0n1 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: ]] 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:14.049 18:49:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.049 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.050 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:14.050 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.050 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:14.050 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:14.050 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:14.050 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:14.050 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.050 18:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.309 nvme0n1 00:29:14.309 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.309 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.309 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.309 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.309 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.309 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.567 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.567 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.567 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.567 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.567 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.567 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:14.567 18:49:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:14.567 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:14.567 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:14.567 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:14.567 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:14.567 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:14.567 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:14.567 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:14.567 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:14.567 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:14.567 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: ]] 00:29:14.567 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.568 18:49:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.568 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.826 nvme0n1 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: ]] 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.826 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.085 nvme0n1 00:29:15.085 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.085 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.085 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.085 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.085 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.085 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: ]] 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.344 18:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.603 nvme0n1 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.603 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.862 nvme0n1 00:29:15.862 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.862 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.862 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.862 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.862 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.862 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
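[annotation] A detail worth noting in the keyid=4 pass above: its ckey is empty, and host/auth.sh@58 builds the controller-key argument with bash's ':+' alternate-value expansion, so the --dhchap-ctrlr-key flag simply disappears and the session authenticates unidirectionally. A self-contained illustration of that expansion, with invented values:

ckeys=([4]="")    # key 4 carries no controller secret
keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"    # -> 0: an empty ckey expands to zero extra arguments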
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.121 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.121 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.121 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.121 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.121 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.121 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:16.121 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.121 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:16.121 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.121 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:16.121 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:16.121 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:16.121 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:29:16.121 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:29:16.121 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:16.121 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM0MTJhNTM3MmE3YzI2NWU3NTE4YzgzMWY3NTVkOTK4UFCC: 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: ]] 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxNTIzM2VmY2QwNWQwOGM2NTMzNzk1MDg3YWQyMGNmZjE0NDNhZTIyYTJmNjVmMDliZWZiZGUxMTUyZWUwY8MHpZ8=: 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.122 18:49:31 
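[annotation] Each connect_authenticate pass traced here runs the same four RPCs: pin the initiator to a single digest/dhgroup, attach with the matching key pair, confirm the controller came up, and detach. Pulled out of the harness, the ffdhe8192/key0 iteration would look roughly like this (scripts/rpc.py is assumed to be what rpc_cmd wraps, as in SPDK's tree):

./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
[[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
./scripts/rpc.py bdev_nvme_detach_controller nvme0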
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.122 18:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.690 nvme0n1 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: ]] 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:16.690 18:49:32 
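[annotation] The get_main_ns_ip trace repeated throughout (nvmf/common.sh@765-779) is a small indirection trick: the associative array maps each transport to the name of an environment variable, and the final echo dereferences that name. Reduced to its essentials, with stand-in values:

TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP, a variable *name*
echo "${!ip}"                          # indirect expansion -> 10.0.0.1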
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.690 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.258 nvme0n1 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: ]] 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:17.258 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.259 18:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.826 nvme0n1 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:17.826 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==: 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: ]] 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNiMTVhN2EwZDc4NzljNDlhNzkzMmMyZDNjNWRlZjEWPt1Z: 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.827 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.394 nvme0n1 00:29:18.394 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.394 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.394 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.394 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.394 18:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTliNTZjNGU4NmRiMjlkNTU2YzE2ZGQ4N2U3MDdkZWQ0MTQyNzExNDYwYmM0NzBjNzQ5ZTRlYjBjZDZkYTdlZpnYF7A=: 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:18.394 18:49:34 
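[annotation] The secrets cycling through these passes all use the NVMe in-band authentication representation DHHC-1:<t>:<base64>:. To the best of my reading of the DH-HMAC-CHAP spec and nvme-cli conventions (not something this log states), <t> names the hash used to transform the secret: 00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512, implying 32/48/64-byte secrets, and the base64 payload appends a 4-byte CRC32. A quick length check against the key 3 secret above is consistent with that:

key='DHHC-1:02:MzU2ODA3NWE3ZjJlMjEzMDkxNzdiYThlNmFmNWQ4YzAwZjZiNGI2YTQxZTg2MDRh0h2nhA==:'
b64=${key#DHHC-1:02:}; b64=${b64%:}
printf %s "$b64" | base64 -d | wc -c   # -> 52 bytes: 48-byte secret + 4-byte CRC32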
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.394 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.395 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:18.395 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:18.395 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:18.395 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.395 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.395 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:18.395 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.395 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:18.395 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:18.395 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:18.395 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:18.395 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.395 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.962 nvme0n1 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: ]] 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:18.962 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.963 2024/12/14 18:49:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:29:18.963 request: 00:29:18.963 { 00:29:18.963 "method": "bdev_nvme_attach_controller", 00:29:18.963 "params": { 00:29:18.963 "name": "nvme0", 00:29:18.963 "trtype": "tcp", 00:29:18.963 "traddr": "10.0.0.1", 00:29:18.963 "adrfam": "ipv4", 00:29:18.963 "trsvcid": "4420", 00:29:18.963 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:18.963 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:18.963 "prchk_reftag": false, 00:29:18.963 "prchk_guard": false, 00:29:18.963 "hdgst": false, 00:29:18.963 "ddgst": false, 00:29:18.963 "allow_unrecognized_csi": false 00:29:18.963 } 00:29:18.963 } 00:29:18.963 Got JSON-RPC error response 00:29:18.963 GoRPCClient: error on JSON-RPC call 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
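[annotation] The NOT wrapper whose internals are traced at autotest_common.sh@650-677 is the harness's negative-assertion primitive: it runs the wrapped command and inverts the result, so the rejected attach above (expected, since no --dhchap-key was supplied) counts as a pass. Its shape, paraphrased from the trace and leaving out the signal handling (es > 128) and allowed-output check also visible there:

NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))    # succeed only if the wrapped command failed
}
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 ...    # (flags elided) passes: attach is rejected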
get_main_ns_ip 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:18.963 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:19.222 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:19.222 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:19.222 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.222 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.222 2024/12/14 18:49:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:29:19.222 request: 00:29:19.222 { 00:29:19.222 "method": "bdev_nvme_attach_controller", 00:29:19.222 "params": { 00:29:19.222 "name": "nvme0", 00:29:19.222 "trtype": "tcp", 00:29:19.222 "traddr": "10.0.0.1", 00:29:19.222 "adrfam": "ipv4", 00:29:19.222 "trsvcid": "4420", 00:29:19.222 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:19.222 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:19.222 "prchk_reftag": false, 00:29:19.222 "prchk_guard": false, 
00:29:19.222 "hdgst": false, 00:29:19.222 "ddgst": false, 00:29:19.222 "dhchap_key": "key2", 00:29:19.222 "allow_unrecognized_csi": false 00:29:19.222 } 00:29:19.222 } 00:29:19.222 Got JSON-RPC error response 00:29:19.222 GoRPCClient: error on JSON-RPC call 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t 
rpc_cmd 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.223 2024/12/14 18:49:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:29:19.223 request: 00:29:19.223 { 00:29:19.223 "method": "bdev_nvme_attach_controller", 00:29:19.223 "params": { 00:29:19.223 "name": "nvme0", 00:29:19.223 "trtype": "tcp", 00:29:19.223 "traddr": "10.0.0.1", 00:29:19.223 "adrfam": "ipv4", 00:29:19.223 "trsvcid": "4420", 00:29:19.223 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:19.223 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:19.223 "prchk_reftag": false, 00:29:19.223 "prchk_guard": false, 00:29:19.223 "hdgst": false, 00:29:19.223 "ddgst": false, 00:29:19.223 "dhchap_key": "key1", 00:29:19.223 "dhchap_ctrlr_key": "ckey2", 00:29:19.223 "allow_unrecognized_csi": false 00:29:19.223 } 00:29:19.223 } 00:29:19.223 Got JSON-RPC error response 00:29:19.223 GoRPCClient: error on JSON-RPC call 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 
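[annotation] Worth noting across all three rejected attaches: a DH-HMAC-CHAP failure does not surface as a dedicated authentication error; bdev_nvme_attach_controller just reports Code=-5 (Input/output error), so the test can only assert a non-zero exit plus an empty controller list afterwards. The same check standalone, with the rpc.py path assumed as before:

if ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
    echo "attach with mismatched ctrlr key unexpectedly succeeded" >&2
    exit 1
fi
(( $(./scripts/rpc.py bdev_nvme_get_controllers | jq length) == 0 ))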
10.0.0.1 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.223 nvme0n1 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: ]] 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.223 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:29:19.482 18:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.482 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.482 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:19.482 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:29:19.482 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:19.482 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:19.482 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:19.482 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:19.482 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:19.482 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:19.483 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.483 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.483 2024/12/14 18:49:35 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-5 Msg=Input/output error 00:29:19.483 request: 00:29:19.483 { 00:29:19.483 "method": "bdev_nvme_set_keys", 00:29:19.483 "params": { 00:29:19.483 "name": "nvme0", 00:29:19.483 "dhchap_key": "key1", 00:29:19.483 "dhchap_ctrlr_key": "ckey2" 00:29:19.483 } 00:29:19.483 } 00:29:19.483 Got JSON-RPC error response 00:29:19.483 GoRPCClient: error on JSON-RPC call 00:29:19.483 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:19.483 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:19.483 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:19.483 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:19.483 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:19.483 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.483 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.483 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:19.483 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.483 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.483 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:19.483 18:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:20.418 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.419 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.419 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:20.419 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.419 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.419 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:29:20.419 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:20.419 18:49:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.419 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:20.419 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:20.419 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:20.419 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:20.419 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:20.419 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.419 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:20.419 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxMTY2YTdjY2VkMWJlNmEzODZmZWIxZjNiN2NiNjVkMTVkYzJmMzcwYTBmZWYwS5QS3w==: 00:29:20.419 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: ]] 00:29:20.419 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDcxZTExNDdmZGM2NmY1OGU1ODdjMTllMzZjZTkwNzVkNTk0MDhjMTY1NDhjMTk2kMJipQ==: 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.678 nvme0n1 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
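The nvmet_auth_set_key trace running through this point installs a DH-HMAC-CHAP secret (key) and controller secret (ckey) for the sha256/ffdhe2048 combination; the two-digit field after the DHHC-1: prefix identifies the HMAC used to transform the secret (00 = none, 01 = SHA-256, 02 = SHA-384). As a minimal sketch of how such secrets can be minted and installed on the kernel nvmet target outside this harness — assuming an nvme-cli with the gen-dhchap-key subcommand and a kernel built with nvmet auth support, neither of which is shown in this log:

    # Sketch only (not from this log): generate a DH-HMAC-CHAP secret and
    # install it for a host on the kernel nvmet target via configfs.
    hostnqn="nqn.2024-02.io.spdk:host0"
    key=$(nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn "$hostnqn")   # yields a DHHC-1:01:... string
    ckey=$(nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn "$hostnqn")  # separate controller secret
    mkdir -p "/sys/kernel/config/nvmet/hosts/$hostnqn"
    echo "$key"  > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_key"        # host secret
    echo "$ckey" > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_ctrlr_key"  # enables bidirectional auth

On the SPDK initiator side the matching secrets are referenced by key name through --dhchap-key/--dhchap-ctrlr-key, as the rpc_cmd bdev_nvme_attach_controller and bdev_nvme_set_keys calls in this trace show; mismatched pairs are exactly what the NOT-wrapped negative tests here expect to fail.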
00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGQwMGE1NTQ1NDY2Y2E5YzU0NmI4Njc4NzRhZWExY2EXLn2z: 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: ]] 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjZWQzOTk2ODMwM2EzODYxOTlmMTA1N2FjZDUwOTDaFv8i: 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.678 2024/12/14 18:49:36 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:29:20.678 request: 00:29:20.678 { 00:29:20.678 "method": "bdev_nvme_set_keys", 00:29:20.678 "params": { 00:29:20.678 "name": "nvme0", 00:29:20.678 "dhchap_key": "key2", 00:29:20.678 "dhchap_ctrlr_key": "ckey1" 00:29:20.678 } 00:29:20.678 } 00:29:20.678 Got JSON-RPC error response 00:29:20.678 GoRPCClient: error on JSON-RPC call 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:20.678 18:49:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:29:20.678 18:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:29:21.613 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.613 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:21.613 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.613 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.872 rmmod nvme_tcp 00:29:21.872 rmmod nvme_fabrics 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 112798 ']' 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 112798 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 112798 ']' 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 112798 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 112798 00:29:21.872 killing process with pid 112798 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 112798' 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 112798 00:29:21.872 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 112798 00:29:22.131 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:22.131 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:22.131 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:22.131 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:29:22.131 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:22.131 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:29:22.131 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:29:22.131 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:22.131 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:22.131 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:22.131 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:22.131 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:22.131 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:22.131 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:22.131 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:22.131 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:22.131 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:22.131 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:22.131 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:22.390 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:22.390 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:22.390 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:22.390 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:22.390 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.390 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.390 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.390 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:29:22.390 18:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:22.390 18:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:22.390 18:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:22.390 18:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:22.390 18:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:29:22.390 18:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:22.391 18:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:22.391 18:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:22.391 18:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:22.391 18:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:29:22.391 18:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:29:22.391 18:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:23.327 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:23.327 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:23.327 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:23.327 18:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.HbI /tmp/spdk.key-null.rqF /tmp/spdk.key-sha256.fVy /tmp/spdk.key-sha384.PuV /tmp/spdk.key-sha512.05P /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:29:23.327 18:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:23.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:23.585 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:23.585 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:23.844 00:29:23.844 real 0m35.845s 00:29:23.844 user 0m32.842s 00:29:23.844 sys 0m4.039s 00:29:23.844 18:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:23.844 18:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.844 ************************************ 00:29:23.844 END TEST nvmf_auth_host 00:29:23.844 ************************************ 00:29:23.844 18:49:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:29:23.844 18:49:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:23.844 18:49:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:23.844 18:49:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:23.844 18:49:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.844 ************************************ 00:29:23.844 START TEST nvmf_digest 00:29:23.844 
************************************ 00:29:23.844 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:23.844 * Looking for test storage... 00:29:23.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:23.844 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:23.844 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:29:23.844 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:24.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.121 --rc genhtml_branch_coverage=1 00:29:24.121 --rc genhtml_function_coverage=1 00:29:24.121 --rc genhtml_legend=1 00:29:24.121 --rc geninfo_all_blocks=1 00:29:24.121 --rc geninfo_unexecuted_blocks=1 00:29:24.121 00:29:24.121 ' 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:24.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.121 --rc genhtml_branch_coverage=1 00:29:24.121 --rc genhtml_function_coverage=1 00:29:24.121 --rc genhtml_legend=1 00:29:24.121 --rc geninfo_all_blocks=1 00:29:24.121 --rc geninfo_unexecuted_blocks=1 00:29:24.121 00:29:24.121 ' 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:24.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.121 --rc genhtml_branch_coverage=1 00:29:24.121 --rc genhtml_function_coverage=1 00:29:24.121 --rc genhtml_legend=1 00:29:24.121 --rc geninfo_all_blocks=1 00:29:24.121 --rc geninfo_unexecuted_blocks=1 00:29:24.121 00:29:24.121 ' 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:24.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.121 --rc genhtml_branch_coverage=1 00:29:24.121 --rc genhtml_function_coverage=1 00:29:24.121 --rc genhtml_legend=1 00:29:24.121 --rc geninfo_all_blocks=1 00:29:24.121 --rc geninfo_unexecuted_blocks=1 00:29:24.121 00:29:24.121 ' 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.121 18:49:39 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:24.121 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:24.122 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@456 -- # nvmf_veth_init 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:24.122 Cannot find device "nvmf_init_br" 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:24.122 Cannot find device "nvmf_init_br2" 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:24.122 Cannot find device "nvmf_tgt_br" 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:29:24.122 Cannot find device "nvmf_tgt_br2" 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:24.122 Cannot find device "nvmf_init_br" 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:24.122 Cannot find device "nvmf_init_br2" 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:24.122 Cannot find device "nvmf_tgt_br" 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:24.122 Cannot find device "nvmf_tgt_br2" 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:24.122 Cannot find device "nvmf_br" 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:24.122 Cannot find device "nvmf_init_if" 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:24.122 Cannot find device "nvmf_init_if2" 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:24.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:24.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:24.122 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:24.447 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:24.447 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:24.447 18:49:39 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:24.447 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:24.447 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:24.447 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:24.447 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:24.447 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:24.447 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:24.447 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:24.447 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:24.447 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:24.447 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:24.447 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:24.447 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:24.447 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:24.447 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:24.447 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:24.447 18:49:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:24.447 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:29:24.447 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:29:24.447 00:29:24.447 --- 10.0.0.3 ping statistics --- 00:29:24.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.447 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:24.447 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:24.447 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:29:24.447 00:29:24.447 --- 10.0.0.4 ping statistics --- 00:29:24.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.447 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:24.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:24.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:29:24.447 00:29:24.447 --- 10.0.0.1 ping statistics --- 00:29:24.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.447 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:24.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:29:24.447 00:29:24.447 --- 10.0.0.2 ping statistics --- 00:29:24.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.447 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@457 -- # return 0 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:24.447 ************************************ 00:29:24.447 START TEST nvmf_digest_clean 00:29:24.447 ************************************ 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
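The four pings just above are the final health check of the topology that nvmf_veth_init built earlier in this trace: 10.0.0.1 and 10.0.0.2 sit on veth endpoints in the root namespace (initiator side), 10.0.0.3 and 10.0.0.4 inside the nvmf_tgt_ns_spdk namespace (target side), all switched through the nvmf_br bridge. Condensed from the commands visible in the trace (the second veth pair and the link-up steps are omitted here for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # root namespace -> target namespace, as verified above
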
00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=114454 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 114454 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 114454 ']' 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:24.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:24.447 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:24.447 [2024-12-14 18:49:40.169858] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:24.447 [2024-12-14 18:49:40.169947] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.706 [2024-12-14 18:49:40.312786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.706 [2024-12-14 18:49:40.392580] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.706 [2024-12-14 18:49:40.392670] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.706 [2024-12-14 18:49:40.392685] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.706 [2024-12-14 18:49:40.392696] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.706 [2024-12-14 18:49:40.392706] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
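nvmfappstart above launches nvmf_tgt inside the target namespace with --wait-for-rpc and then blocks in waitforlisten until the JSON-RPC socket answers. A rough equivalent of that wait loop — a sketch of the helper's behavior, not its actual code — looks like:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Poll the RPC socket until the target is ready to accept commands.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
        sleep 0.1
    done
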
00:29:24.706 [2024-12-14 18:49:40.392742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.706 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:24.706 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:24.706 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:24.706 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:24.706 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:24.965 null0 00:29:24.965 [2024-12-14 18:49:40.603831] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.965 [2024-12-14 18:49:40.627982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=114485 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 114485 /var/tmp/bperf.sock 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 114485 ']' 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:29:24.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:24.965 18:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:24.965 [2024-12-14 18:49:40.700737] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:24.965 [2024-12-14 18:49:40.700826] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114485 ] 00:29:25.224 [2024-12-14 18:49:40.841535] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.224 [2024-12-14 18:49:40.939931] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.161 18:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:26.161 18:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:26.161 18:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:26.161 18:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:26.161 18:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:26.420 18:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:26.420 18:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:26.679 nvme0n1 00:29:26.679 18:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:26.679 18:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:26.938 Running I/O for 2 seconds... 
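While the two-second run above executes, the bperf sequence this trace just performed is worth summarizing: bdevperf starts paused (-z --wait-for-rpc), the framework is released with framework_start_init, an NVMe-oF TCP controller is attached with data digest enabled (--ddgst), and bdevperf.py triggers the workload over the bperf socket. Condensed directly from the trace:

    sock=/var/tmp/bperf.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r "$sock" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
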
00:29:28.810 22936.00 IOPS, 89.59 MiB/s
[2024-12-14T18:49:44.563Z] 22804.00 IOPS, 89.08 MiB/s
00:29:28.810 Latency(us)
00:29:28.810 [2024-12-14T18:49:44.563Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:28.810 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:28.810 nvme0n1 : 2.00 22815.65 89.12 0.00 0.00 5605.58 2815.07 13941.29
00:29:28.810 [2024-12-14T18:49:44.563Z] ===================================================================================================================
00:29:28.810 [2024-12-14T18:49:44.563Z] Total : 22815.65 89.12 0.00 0.00 5605.58 2815.07 13941.29
00:29:28.810 {
00:29:28.810 "results": [
00:29:28.810 {
00:29:28.810 "job": "nvme0n1",
00:29:28.810 "core_mask": "0x2",
00:29:28.810 "workload": "randread",
00:29:28.810 "status": "finished",
00:29:28.810 "queue_depth": 128,
00:29:28.810 "io_size": 4096,
00:29:28.810 "runtime": 2.004589,
00:29:28.810 "iops": 22815.649492240056,
00:29:28.810 "mibps": 89.12363082906272,
00:29:28.810 "io_failed": 0,
00:29:28.810 "io_timeout": 0,
00:29:28.810 "avg_latency_us": 5605.577943931178,
00:29:28.810 "min_latency_us": 2815.069090909091,
00:29:28.810 "max_latency_us": 13941.294545454546
00:29:28.810 }
00:29:28.810 ],
00:29:28.810 "core_count": 1
00:29:28.810 }
00:29:28.810 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:29:28.810 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:29:28.810 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:29:28.810 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:28.810 | select(.opcode=="crc32c")
00:29:28.810 | "\(.module_name) \(.executed)"'
00:29:28.810 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:29.069 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:29:29.069 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:29:29.069 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:29:29.069 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:29.069 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 114485
00:29:29.069 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 114485 ']'
00:29:29.069 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 114485
00:29:29.069 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:29:29.069 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:29.069 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114485
00:29:29.069 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:29.069 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:29.069 killing process with pid 114485 00:29:29.069 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114485' 00:29:29.069 Received shutdown signal, test time was about 2.000000 seconds 00:29:29.069 00:29:29.069 Latency(us) 00:29:29.069 [2024-12-14T18:49:44.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.069 [2024-12-14T18:49:44.822Z] =================================================================================================================== 00:29:29.069 [2024-12-14T18:49:44.822Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:29.069 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 114485 00:29:29.069 18:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 114485 00:29:29.328 18:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:29.328 18:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:29.328 18:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:29.328 18:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:29.328 18:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:29.328 18:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:29.328 18:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:29.328 18:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=114580 00:29:29.328 18:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:29.328 18:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 114580 /var/tmp/bperf.sock 00:29:29.328 18:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 114580 ']' 00:29:29.328 18:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:29.328 18:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:29.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:29.328 18:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:29.328 18:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:29.328 18:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:29.586 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:29.586 Zero copy mechanism will not be used. 00:29:29.586 [2024-12-14 18:49:45.120970] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:29.586 [2024-12-14 18:49:45.121079] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114580 ] 00:29:29.586 [2024-12-14 18:49:45.253121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.586 [2024-12-14 18:49:45.324469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.522 18:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:30.522 18:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:30.522 18:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:30.522 18:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:30.522 18:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:30.781 18:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:30.781 18:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:31.040 nvme0n1 00:29:31.298 18:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:31.298 18:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:31.298 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:31.298 Zero copy mechanism will not be used. 00:29:31.298 Running I/O for 2 seconds... 
00:29:33.612 9386.00 IOPS, 1173.25 MiB/s
[2024-12-14T18:49:49.365Z] 9577.50 IOPS, 1197.19 MiB/s
00:29:33.612 Latency(us)
00:29:33.612 [2024-12-14T18:49:49.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:33.612 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:33.612 nvme0n1 : 2.00 9575.67 1196.96 0.00 0.00 1668.04 513.86 10664.49
00:29:33.612 [2024-12-14T18:49:49.365Z] ===================================================================================================================
00:29:33.612 [2024-12-14T18:49:49.365Z] Total : 9575.67 1196.96 0.00 0.00 1668.04 513.86 10664.49
00:29:33.612 {
00:29:33.612 "results": [
00:29:33.612 {
00:29:33.612 "job": "nvme0n1",
00:29:33.612 "core_mask": "0x2",
00:29:33.612 "workload": "randread",
00:29:33.612 "status": "finished",
00:29:33.612 "queue_depth": 16,
00:29:33.612 "io_size": 131072,
00:29:33.612 "runtime": 2.002053,
00:29:33.612 "iops": 9575.670574155629,
00:29:33.612 "mibps": 1196.9588217694536,
00:29:33.612 "io_failed": 0,
00:29:33.612 "io_timeout": 0,
00:29:33.612 "avg_latency_us": 1668.037395118574,
00:29:33.612 "min_latency_us": 513.8618181818182,
00:29:33.612 "max_latency_us": 10664.494545454545
00:29:33.612 }
00:29:33.612 ],
00:29:33.612 "core_count": 1
00:29:33.612 }
00:29:33.612 18:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:29:33.612 18:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:29:33.612 18:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:29:33.612 18:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:33.612 | select(.opcode=="crc32c")
00:29:33.612 | "\(.module_name) \(.executed)"'
00:29:33.612 18:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:33.612 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:29:33.612 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:29:33.612 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:29:33.612 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:33.612 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 114580
00:29:33.612 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 114580 ']'
00:29:33.612 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 114580
00:29:33.612 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:29:33.612 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:33.612 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114580
00:29:33.612 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:33.612 killing process with pid 114580 Received shutdown signal, test time was about 2.000000 seconds
00:29:33.612 Latency(us) 00:29:33.612 [2024-12-14T18:49:49.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.612 [2024-12-14T18:49:49.365Z] =================================================================================================================== 00:29:33.612 [2024-12-14T18:49:49.365Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:33.612 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:33.612 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114580' 00:29:33.612 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 114580 00:29:33.612 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 114580 00:29:33.871 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:33.871 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:33.871 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:33.871 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:33.871 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:33.871 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:33.871 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:33.871 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=114666 00:29:33.871 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 114666 /var/tmp/bperf.sock 00:29:33.871 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:33.871 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 114666 ']' 00:29:33.871 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:33.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:33.871 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:33.872 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:33.872 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:33.872 18:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:33.872 [2024-12-14 18:49:49.581346] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:33.872 [2024-12-14 18:49:49.581444] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114666 ] 00:29:34.130 [2024-12-14 18:49:49.718945] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.131 [2024-12-14 18:49:49.792733] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.069 18:49:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:35.069 18:49:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:35.069 18:49:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:35.069 18:49:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:35.069 18:49:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:35.328 18:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:35.328 18:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:35.586 nvme0n1 00:29:35.586 18:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:35.586 18:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:35.845 Running I/O for 2 seconds... 
00:29:37.716 26859.00 IOPS, 104.92 MiB/s
[2024-12-14T18:49:53.469Z] 26959.50 IOPS, 105.31 MiB/s
00:29:37.716 Latency(us)
00:29:37.717 [2024-12-14T18:49:53.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:37.717 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:37.717 nvme0n1 : 2.01 26933.81 105.21 0.00 0.00 4746.62 2398.02 8877.15
00:29:37.717 [2024-12-14T18:49:53.470Z] ===================================================================================================================
00:29:37.717 [2024-12-14T18:49:53.470Z] Total : 26933.81 105.21 0.00 0.00 4746.62 2398.02 8877.15
00:29:37.975 {
00:29:37.975 "results": [
00:29:37.975 {
00:29:37.975 "job": "nvme0n1",
00:29:37.975 "core_mask": "0x2",
00:29:37.975 "workload": "randwrite",
00:29:37.975 "status": "finished",
00:29:37.975 "queue_depth": 128,
00:29:37.975 "io_size": 4096,
00:29:37.975 "runtime": 2.00666,
00:29:37.975 "iops": 26933.81041133027,
00:29:37.975 "mibps": 105.21019691925886,
00:29:37.975 "io_failed": 0,
00:29:37.975 "io_timeout": 0,
00:29:37.975 "avg_latency_us": 4746.619661473095,
00:29:37.975 "min_latency_us": 2398.021818181818,
00:29:37.975 "max_latency_us": 8877.149090909092
00:29:37.975 }
00:29:37.975 ],
00:29:37.975 "core_count": 1
00:29:37.975 }
00:29:37.975 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:29:37.975 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:29:37.975 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:29:37.975 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:37.975 | select(.opcode=="crc32c")
00:29:37.975 | "\(.module_name) \(.executed)"'
00:29:37.975 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:38.234 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:29:38.234 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:29:38.234 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:29:38.234 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:38.234 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 114666
00:29:38.234 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 114666 ']'
00:29:38.234 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 114666
00:29:38.234 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:29:38.234 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:38.234 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114666
00:29:38.234 killing process with pid 114666 Received shutdown signal, test time was about 2.000000 seconds
00:29:38.234
00:29:38.234 Latency(us) [2024-12-14T18:49:53.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:38.234 [2024-12-14T18:49:53.987Z] ===================================================================================================================
00:29:38.234 [2024-12-14T18:49:53.987Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:38.234 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:38.234 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:38.234 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114666'
00:29:38.234 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 114666
00:29:38.234 18:49:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 114666
00:29:38.493 18:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:29:38.493 18:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:29:38.493 18:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:29:38.493 18:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:29:38.493 18:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:29:38.493 18:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:29:38.493 18:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:29:38.493 18:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=114752
00:29:38.493 18:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 114752 /var/tmp/bperf.sock
00:29:38.493 18:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:29:38.493 18:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 114752 ']'
00:29:38.493 18:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:38.493 18:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:38.493 18:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:38.493 18:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:38.493 18:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:29:38.493 [2024-12-14 18:49:54.097788] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:29:38.493 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:38.493 Zero copy mechanism will not be used.
00:29:38.493 [2024-12-14 18:49:54.097881] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114752 ] 00:29:38.493 [2024-12-14 18:49:54.238862] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.752 [2024-12-14 18:49:54.316109] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.687 18:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:39.687 18:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:39.687 18:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:39.687 18:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:39.687 18:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:39.945 18:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:39.945 18:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:40.204 nvme0n1 00:29:40.204 18:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:40.204 18:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:40.204 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:40.204 Zero copy mechanism will not be used. 00:29:40.204 Running I/O for 2 seconds... 
00:29:42.516 7410.00 IOPS, 926.25 MiB/s
[2024-12-14T18:49:58.269Z] 7528.00 IOPS, 941.00 MiB/s
00:29:42.516 Latency(us)
00:29:42.516 [2024-12-14T18:49:58.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:42.516 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:42.516 nvme0n1 : 2.00 7524.12 940.51 0.00 0.00 2122.15 1549.03 10128.29
00:29:42.516 [2024-12-14T18:49:58.269Z] ===================================================================================================================
00:29:42.516 [2024-12-14T18:49:58.269Z] Total : 7524.12 940.51 0.00 0.00 2122.15 1549.03 10128.29
00:29:42.516 {
00:29:42.516 "results": [
00:29:42.516 {
00:29:42.516 "job": "nvme0n1",
00:29:42.516 "core_mask": "0x2",
00:29:42.516 "workload": "randwrite",
00:29:42.516 "status": "finished",
00:29:42.516 "queue_depth": 16,
00:29:42.516 "io_size": 131072,
00:29:42.516 "runtime": 2.003159,
00:29:42.516 "iops": 7524.11565931611,
00:29:42.516 "mibps": 940.5144574145138,
00:29:42.516 "io_failed": 0,
00:29:42.516 "io_timeout": 0,
00:29:42.516 "avg_latency_us": 2122.1466126230457,
00:29:42.516 "min_latency_us": 1549.0327272727272,
00:29:42.516 "max_latency_us": 10128.290909090909
00:29:42.516 }
00:29:42.516 ],
00:29:42.516 "core_count": 1
00:29:42.516 }
00:29:42.516 18:49:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:29:42.516 18:49:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:29:42.516 18:49:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:29:42.516 18:49:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:42.516 | select(.opcode=="crc32c")
00:29:42.516 | "\(.module_name) \(.executed)"'
00:29:42.516 18:49:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:42.516 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:29:42.516 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:29:42.516 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:29:42.516 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:42.516 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 114752
00:29:42.517 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 114752 ']'
00:29:42.517 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 114752
00:29:42.517 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:29:42.517 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:42.517 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114752
00:29:42.776 killing process with pid 114752 Received shutdown signal, test time was about 2.000000 seconds
00:29:42.776
00:29:42.776 Latency(us) [2024-12-14T18:49:58.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:42.776 [2024-12-14T18:49:58.529Z] ===================================================================================================================
00:29:42.776 [2024-12-14T18:49:58.529Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:42.776 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:42.776 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:42.776 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114752'
00:29:42.776 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 114752
00:29:42.776 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 114752
00:29:43.034 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 114454
00:29:43.034 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 114454 ']'
00:29:43.034 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 114454
00:29:43.034 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:29:43.034 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:43.034 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114454
00:29:43.034 killing process with pid 114454
00:29:43.034 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:29:43.034 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:29:43.034 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114454'
00:29:43.034 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 114454
00:29:43.034 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 114454
00:29:43.293
00:29:43.293 real 0m18.746s
00:29:43.293 user 0m36.456s
00:29:43.293 sys 0m4.824s
00:29:43.293 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:43.293 ************************************
00:29:43.293 END TEST nvmf_digest_clean
00:29:43.293 ************************************
00:29:43.293 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:29:43.293 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:29:43.293 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:29:43.293 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable
00:29:43.293 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:43.293 ************************************
00:29:43.293 START TEST nvmf_digest_error
00:29:43.293 ************************************
00:29:43.293 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error
00:29:43.293
18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:43.293 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:43.293 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:43.293 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:43.293 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=114871 00:29:43.293 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 114871 00:29:43.293 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:43.293 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 114871 ']' 00:29:43.293 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.293 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:43.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:43.293 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.293 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:43.294 18:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:43.294 [2024-12-14 18:49:58.971132] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:43.294 [2024-12-14 18:49:58.971251] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:43.552 [2024-12-14 18:49:59.112112] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.552 [2024-12-14 18:49:59.174469] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:43.552 [2024-12-14 18:49:59.174546] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:43.552 [2024-12-14 18:49:59.174566] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:43.552 [2024-12-14 18:49:59.174574] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:43.552 [2024-12-14 18:49:59.174580] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:43.552 [2024-12-14 18:49:59.174618] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.552 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:43.552 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:43.552 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:43.552 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:43.552 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:43.552 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.552 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:43.552 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.552 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:43.553 [2024-12-14 18:49:59.259096] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:43.553 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.553 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:43.553 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:43.553 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.553 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:43.812 null0 00:29:43.812 [2024-12-14 18:49:59.402979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:43.812 [2024-12-14 18:49:59.427158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:43.812 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.812 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:43.812 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:43.812 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:43.812 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:43.812 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:43.812 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=114906 00:29:43.812 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 114906 /var/tmp/bperf.sock 00:29:43.812 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:43.812 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 114906 ']' 00:29:43.812 18:49:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:43.812 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:43.812 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:43.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:43.812 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:43.812 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:43.812 [2024-12-14 18:49:59.489213] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:43.812 [2024-12-14 18:49:59.489331] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114906 ] 00:29:44.070 [2024-12-14 18:49:59.615863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.070 [2024-12-14 18:49:59.687176] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.328 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:44.328 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:44.328 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:44.328 18:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:44.328 18:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:44.328 18:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.328 18:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:44.328 18:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.328 18:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:44.328 18:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:44.895 nvme0n1 00:29:44.895 18:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:44.895 18:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.895 18:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:44.895 18:50:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.895 18:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:44.895 18:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:44.895 Running I/O for 2 seconds... 00:29:44.895 [2024-12-14 18:50:00.529869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:44.895 [2024-12-14 18:50:00.529918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.895 [2024-12-14 18:50:00.529930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.895 [2024-12-14 18:50:00.539686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:44.895 [2024-12-14 18:50:00.539720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.895 [2024-12-14 18:50:00.539732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.895 [2024-12-14 18:50:00.550973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:44.895 [2024-12-14 18:50:00.551042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:68 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.895 [2024-12-14 18:50:00.551054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.895 [2024-12-14 18:50:00.561835] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:44.895 [2024-12-14 18:50:00.561869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.895 [2024-12-14 18:50:00.561880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.895 [2024-12-14 18:50:00.573422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:44.895 [2024-12-14 18:50:00.573455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.895 [2024-12-14 18:50:00.573466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.895 [2024-12-14 18:50:00.584331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:44.895 [2024-12-14 18:50:00.584364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.895 [2024-12-14 18:50:00.584374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:44.895 [2024-12-14 18:50:00.595092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:44.895 [2024-12-14 18:50:00.595124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.895 [2024-12-14 18:50:00.595135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.895 [2024-12-14 18:50:00.606528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:44.895 [2024-12-14 18:50:00.606560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.895 [2024-12-14 18:50:00.606572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.895 [2024-12-14 18:50:00.616001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:44.895 [2024-12-14 18:50:00.616034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.895 [2024-12-14 18:50:00.616046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.895 [2024-12-14 18:50:00.627195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:44.895 [2024-12-14 18:50:00.627229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.895 [2024-12-14 18:50:00.627240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.895 [2024-12-14 18:50:00.638024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:44.895 [2024-12-14 18:50:00.638056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.895 [2024-12-14 18:50:00.638067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.180 [2024-12-14 18:50:00.649500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:45.180 [2024-12-14 18:50:00.649543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.180 [2024-12-14 18:50:00.649564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.180 [2024-12-14 18:50:00.660983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:45.180 [2024-12-14 18:50:00.661027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.180 [2024-12-14 18:50:00.661048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.180 [2024-12-14 18:50:00.671929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:45.180 [2024-12-14 18:50:00.671967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.180 [2024-12-14 18:50:00.671978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.180 [2024-12-14 18:50:00.681244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:45.180 [2024-12-14 18:50:00.681301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.180 [2024-12-14 18:50:00.681313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.180 [2024-12-14 18:50:00.693300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:45.180 [2024-12-14 18:50:00.693340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.180 [2024-12-14 18:50:00.693352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.180 [2024-12-14 18:50:00.704938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:45.180 [2024-12-14 18:50:00.704979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.180 [2024-12-14 18:50:00.705002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.180 [2024-12-14 18:50:00.715710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:45.180 [2024-12-14 18:50:00.715751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.180 [2024-12-14 18:50:00.715762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.180 [2024-12-14 18:50:00.726521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:45.180 [2024-12-14 18:50:00.726562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.180 [2024-12-14 18:50:00.726573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.180 [2024-12-14 18:50:00.735739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:45.180 [2024-12-14 18:50:00.735771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.180 [2024-12-14 18:50:00.735783] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:45.180 [2024-12-14 18:50:00.746920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0)
00:29:45.180 [2024-12-14 18:50:00.746961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.180 [2024-12-14 18:50:00.746972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... further data digest errors on tqpair=(0x6e75f0), 18:50:00.758 through 18:50:01.509, elided: each repeats the same three-line pattern (data digest error, READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion), differing only in timestamp, cid, and lba ...]
00:29:45.970 23323.00 IOPS, 91.11 MiB/s [2024-12-14T18:50:01.723Z]
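Two notes on the output above. First, the throughput sample is internally consistent: 23323.00 IOPS at 91.11 MiB/s works out to 23323 x 4096 B/s (about 95.53 MB/s, i.e. 91.11 MiB/s), so the len:1 READs are hitting a namespace formatted with 4 KiB blocks. Second, the repeating failure itself: NVMe/TCP can protect each data PDU with a CRC-32C data digest (the DDGST field), and the nvme_tcp_accel_seq_recv_compute_crc32_done callback in these lines is SPDK finishing that digest computation over received READ data. A mismatch is reported as a data digest error, and the command is completed with generic status 00/22 (status code type 0x0, status code 0x22, COMMAND TRANSIENT TRANSPORT ERROR) with dnr:0, meaning the host is allowed to retry. This run appears to exercise that path deliberately, since every READ on qid:1 fails the check. The C sketch below shows the digest check in isolation; it is a minimal illustration of CRC-32C (the Castagnoli polynomial used by NVMe/TCP and iSCSI), not SPDK's implementation, and verify_data_digest is a hypothetical helper name.

/* Minimal CRC-32C sketch; illustrative only, not SPDK code. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise, reflected CRC-32C: polynomial 0x1EDC6F41 (reflected 0x82F63B78),
 * initial value all-ones, final XOR all-ones. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical helper: returns 0 when a recomputed digest matches the one
 * received with the data; a nonzero return is the condition the log above
 * prints as "data digest error". */
static int verify_data_digest(const uint8_t *data, size_t len, uint32_t ddgst)
{
    return crc32c(data, len) == ddgst ? 0 : -1;
}

int main(void)
{
    /* "123456789" is the conventional CRC test vector; CRC-32C yields 0xE3069283. */
    const uint8_t data[] = "123456789";
    uint32_t good = crc32c(data, 9);
    printf("crc32c = 0x%08x (expect 0xe3069283)\n", good);
    printf("intact data:    %s\n", verify_data_digest(data, 9, good) ? "digest error" : "ok");
    printf("corrupt digest: %s\n", verify_data_digest(data, 9, good ^ 1u) ? "digest error" : "ok");
    return 0;
}

Because dnr (do not retry) is 0 in every completion, each of these failures is surfaced as transient and retryable rather than fatal, which is why the run keeps issuing READs at full rate while the digest errors accumulate.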
[... the run of data digest errors on tqpair=(0x6e75f0) continues, 18:50:01.521 through 18:50:02.227, elided; same three-line pattern throughout ...]
00:29:46.489 [2024-12-14 18:50:02.237729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0)
00:29:46.489 [2024-12-14 18:50:02.237767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.489 [2024-12-14 18:50:02.237778] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.748 [2024-12-14 18:50:02.247456] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.748 [2024-12-14 18:50:02.247489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.748 [2024-12-14 18:50:02.247499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.748 [2024-12-14 18:50:02.258079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.748 [2024-12-14 18:50:02.258110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.748 [2024-12-14 18:50:02.258121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.748 [2024-12-14 18:50:02.269841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.748 [2024-12-14 18:50:02.269872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.748 [2024-12-14 18:50:02.269882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.748 [2024-12-14 18:50:02.280447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.748 [2024-12-14 18:50:02.280479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.748 [2024-12-14 18:50:02.280490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.748 [2024-12-14 18:50:02.290735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.748 [2024-12-14 18:50:02.290767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.748 [2024-12-14 18:50:02.290778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.748 [2024-12-14 18:50:02.302176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.748 [2024-12-14 18:50:02.302218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.748 [2024-12-14 18:50:02.302230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.748 [2024-12-14 18:50:02.313261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.748 [2024-12-14 18:50:02.313304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.748 [2024-12-14 18:50:02.313315] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.748 [2024-12-14 18:50:02.323949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.748 [2024-12-14 18:50:02.323980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.748 [2024-12-14 18:50:02.323991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.748 [2024-12-14 18:50:02.332744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.748 [2024-12-14 18:50:02.332776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.748 [2024-12-14 18:50:02.332786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.748 [2024-12-14 18:50:02.344025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.748 [2024-12-14 18:50:02.344057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.748 [2024-12-14 18:50:02.344068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.748 [2024-12-14 18:50:02.355115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.748 [2024-12-14 18:50:02.355148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.748 [2024-12-14 18:50:02.355159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.748 [2024-12-14 18:50:02.366425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.748 [2024-12-14 18:50:02.366456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.748 [2024-12-14 18:50:02.366470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.748 [2024-12-14 18:50:02.377749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.748 [2024-12-14 18:50:02.377780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.748 [2024-12-14 18:50:02.377791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.748 [2024-12-14 18:50:02.388465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.748 [2024-12-14 18:50:02.388497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:46.748 [2024-12-14 18:50:02.388507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.748 [2024-12-14 18:50:02.399875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.748 [2024-12-14 18:50:02.399909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.748 [2024-12-14 18:50:02.399920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.748 [2024-12-14 18:50:02.411224] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.748 [2024-12-14 18:50:02.411256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.748 [2024-12-14 18:50:02.411268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.748 [2024-12-14 18:50:02.421886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.748 [2024-12-14 18:50:02.421917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.748 [2024-12-14 18:50:02.421928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.748 [2024-12-14 18:50:02.431531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.748 [2024-12-14 18:50:02.431563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.748 [2024-12-14 18:50:02.431574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.748 [2024-12-14 18:50:02.442255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.748 [2024-12-14 18:50:02.442287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.749 [2024-12-14 18:50:02.442298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.749 [2024-12-14 18:50:02.452743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.749 [2024-12-14 18:50:02.452775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.749 [2024-12-14 18:50:02.452787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.749 [2024-12-14 18:50:02.463552] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0) 00:29:46.749 [2024-12-14 18:50:02.463593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 
lba:1571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.749 [2024-12-14 18:50:02.463615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:46.749 [2024-12-14 18:50:02.474303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0)
00:29:46.749 [2024-12-14 18:50:02.474334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.749 [2024-12-14 18:50:02.474345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:46.749 [2024-12-14 18:50:02.485842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0)
00:29:46.749 [2024-12-14 18:50:02.485874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.749 [2024-12-14 18:50:02.485885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:46.749 [2024-12-14 18:50:02.497418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0)
00:29:46.749 [2024-12-14 18:50:02.497452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.749 [2024-12-14 18:50:02.497463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:47.007 [2024-12-14 18:50:02.505815] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0)
00:29:47.007 [2024-12-14 18:50:02.505856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:47.007 [2024-12-14 18:50:02.505868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:47.007 23418.50 IOPS, 91.48 MiB/s [2024-12-14T18:50:02.760Z] [2024-12-14 18:50:02.517867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6e75f0)
00:29:47.008 [2024-12-14 18:50:02.517898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:47.008 [2024-12-14 18:50:02.517909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:47.008
00:29:47.008 Latency(us)
00:29:47.008 [2024-12-14T18:50:02.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:47.008 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:47.008 nvme0n1 : 2.00 23429.09 91.52 0.00 0.00 5456.79 2815.07 14417.92
00:29:47.008 [2024-12-14T18:50:02.761Z] ===================================================================================================================
00:29:47.008 [2024-12-14T18:50:02.761Z] Total : 23429.09 91.52 0.00 0.00 5456.79 2815.07 14417.92
00:29:47.008 {
00:29:47.008 "results": [
00:29:47.008 {
00:29:47.008 "job": "nvme0n1",
00:29:47.008 "core_mask": "0x2",
00:29:47.008 "workload": "randread", 00:29:47.008 "status": "finished", 00:29:47.008 "queue_depth": 128, 00:29:47.008 "io_size": 4096, 00:29:47.008 "runtime": 2.004559, 00:29:47.008 "iops": 23429.093381636558, 00:29:47.008 "mibps": 91.5198960220178, 00:29:47.008 "io_failed": 0, 00:29:47.008 "io_timeout": 0, 00:29:47.008 "avg_latency_us": 5456.790124328561, 00:29:47.008 "min_latency_us": 2815.069090909091, 00:29:47.008 "max_latency_us": 14417.92 00:29:47.008 } 00:29:47.008 ], 00:29:47.008 "core_count": 1 00:29:47.008 } 00:29:47.008 18:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:47.008 18:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:47.008 18:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:47.008 18:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:47.008 | .driver_specific 00:29:47.008 | .nvme_error 00:29:47.008 | .status_code 00:29:47.008 | .command_transient_transport_error' 00:29:47.266 18:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 184 > 0 )) 00:29:47.266 18:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 114906 00:29:47.266 18:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 114906 ']' 00:29:47.266 18:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 114906 00:29:47.266 18:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:29:47.266 18:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:47.266 18:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114906 00:29:47.266 18:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:47.266 18:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:47.266 killing process with pid 114906 00:29:47.266 18:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114906' 00:29:47.266 Received shutdown signal, test time was about 2.000000 seconds 00:29:47.266 00:29:47.266 Latency(us) 00:29:47.266 [2024-12-14T18:50:03.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:47.266 [2024-12-14T18:50:03.019Z] =================================================================================================================== 00:29:47.266 [2024-12-14T18:50:03.019Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:47.266 18:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 114906 00:29:47.266 18:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 114906 00:29:47.525 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:47.525 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:47.525 18:50:03 
00:29:47.525 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:47.525 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:47.525 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:47.525 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=114974
00:29:47.525 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:47.525 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 114974 /var/tmp/bperf.sock
00:29:47.525 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 114974 ']'
00:29:47.525 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:47.525 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:47.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:47.525 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:47.525 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:47.525 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:47.525 Zero copy mechanism will not be used.
00:29:47.525 [2024-12-14 18:50:03.121664] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
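A minimal sketch (not part of the captured output) of the launch sequence traced above at host/digest.sh@57-60, assuming this run's paths: bdevperf is started with -z so it comes up idle and waits for RPC configuration on the private socket, and waitforlisten blocks until that socket answers. The polling loop below is a stand-in for the waitforlisten helper, whose internals are not shown in this excerpt.

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Poll until the UNIX domain socket accepts RPCs; rpc_get_methods is a
    # standard SPDK RPC that needs no arguments.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done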
00:29:47.525 [2024-12-14 18:50:03.121775] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114974 ]
00:29:47.783 [2024-12-14 18:50:03.247165] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:47.783 [2024-12-14 18:50:03.304013] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:29:47.783 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:47.783 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:29:47.783 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:47.783 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:48.041 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:48.041 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:48.041 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:48.041 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:48.041 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:48.041 18:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:48.299 nvme0n1
00:29:48.299 18:50:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:48.299 18:50:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:48.299 18:50:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:48.559 18:50:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:48.559 18:50:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:48.559 18:50:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:48.559 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:48.559 Zero copy mechanism will not be used.
00:29:48.559 Running I/O for 2 seconds...
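A minimal sketch (not part of the captured output) condensing the setup just traced at host/digest.sh@61-69: the bdev layer is told to keep NVMe error statistics and retry failed I/O indefinitely, crc32c error injection is cleared, the controller is attached with data digest enabled (--ddgst), and only then is the accel crc32c operation set to corrupt results (-i 32 as traced). The host-side digest computed on received data then stops matching, so READs complete with COMMAND TRANSIENT TRANSPORT ERROR, as in the records that follow. rpc_cmd in the trace is the autotest RPC helper; which socket it addresses is not visible in this excerpt, so directing the injection calls at bperf.sock below is an assumption.

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # -1: retry without limit
    $rpc accel_error_inject_error -o crc32c -t disable                  # start with injection off
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # data digest on
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32            # corrupt crc32c results, -i 32 as traced
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests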
00:29:48.559 [2024-12-14 18:50:04.165670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.559 [2024-12-14 18:50:04.165736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.559 [2024-12-14 18:50:04.165758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.559 [2024-12-14 18:50:04.169926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.559 [2024-12-14 18:50:04.169962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.559 [2024-12-14 18:50:04.169974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.559 [2024-12-14 18:50:04.173394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.559 [2024-12-14 18:50:04.173428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.559 [2024-12-14 18:50:04.173440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.559 [2024-12-14 18:50:04.175932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.559 [2024-12-14 18:50:04.175964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.559 [2024-12-14 18:50:04.175975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.559 [2024-12-14 18:50:04.179185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.559 [2024-12-14 18:50:04.179220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.559 [2024-12-14 18:50:04.179231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.559 [2024-12-14 18:50:04.181766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.559 [2024-12-14 18:50:04.181798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.559 [2024-12-14 18:50:04.181809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.559 [2024-12-14 18:50:04.185150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.559 [2024-12-14 18:50:04.185183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.559 [2024-12-14 18:50:04.185194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.559 [2024-12-14 18:50:04.188641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.559 [2024-12-14 18:50:04.188674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.559 [2024-12-14 18:50:04.188685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.559 [2024-12-14 18:50:04.191307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.559 [2024-12-14 18:50:04.191340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.559 [2024-12-14 18:50:04.191352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.559 [2024-12-14 18:50:04.194487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.559 [2024-12-14 18:50:04.194519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.559 [2024-12-14 18:50:04.194531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.559 [2024-12-14 18:50:04.197581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.559 [2024-12-14 18:50:04.197622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.559 [2024-12-14 18:50:04.197634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.559 [2024-12-14 18:50:04.201102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.559 [2024-12-14 18:50:04.201135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.559 [2024-12-14 18:50:04.201147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.559 [2024-12-14 18:50:04.204070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.559 [2024-12-14 18:50:04.204103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.559 [2024-12-14 18:50:04.204114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.559 [2024-12-14 18:50:04.206850] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.559 [2024-12-14 18:50:04.206882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.559 [2024-12-14 18:50:04.206893] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.559 [2024-12-14 18:50:04.210151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.559 [2024-12-14 18:50:04.210184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.559 [2024-12-14 18:50:04.210195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.559 [2024-12-14 18:50:04.212909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.559 [2024-12-14 18:50:04.212952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.559 [2024-12-14 18:50:04.212963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.559 [2024-12-14 18:50:04.216338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.559 [2024-12-14 18:50:04.216372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.216382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.219328] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.219361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.219373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.222680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.222712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.222723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.225484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.225517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.225529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.228666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.228698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.228709] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.231805] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.231838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.231849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.234719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.234752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.234763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.237300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.237331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.237341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.240685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.240727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.240738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.244637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.244669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.244680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.248499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.248531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.248543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.250644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.250681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:48.560 [2024-12-14 18:50:04.250692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.254768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.254801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.254813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.257338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.257371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.257381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.260570] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.260613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.260626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.263862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.263895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.263906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.267232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.267265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.267276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.269900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.269931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.269942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.273921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.273954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1280 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.273965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.277707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.277749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.277761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.279926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.279958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.279969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.283362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.283395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.283407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.286516] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.286547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.286558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.289983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.290015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.290027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.292767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.292800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.292811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.295924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.295956] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.295967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.298876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.298908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.298919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.302198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.302231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.560 [2024-12-14 18:50:04.302242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.560 [2024-12-14 18:50:04.305262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.560 [2024-12-14 18:50:04.305294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.561 [2024-12-14 18:50:04.305305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.561 [2024-12-14 18:50:04.308666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.561 [2024-12-14 18:50:04.308698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.561 [2024-12-14 18:50:04.308708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.821 [2024-12-14 18:50:04.312171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.821 [2024-12-14 18:50:04.312204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.821 [2024-12-14 18:50:04.312215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.821 [2024-12-14 18:50:04.314856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.821 [2024-12-14 18:50:04.314888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.821 [2024-12-14 18:50:04.314899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.821 [2024-12-14 18:50:04.318106] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:48.821 [2024-12-14 18:50:04.318138] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:48.821 [2024-12-14 18:50:04.318149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:48.821 [2024-12-14 18:50:04.321175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930)
00:29:48.821 [2024-12-14 18:50:04.321207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:48.821 [2024-12-14 18:50:04.321218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... trimmed: the same three-record sequence, a data digest error on tqpair=(0x17e5930) reported by nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done, the affected READ printed by nvme_qpair.c: 243:nvme_io_qpair_print_command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) dnr:0 completion printed by nvme_qpair.c: 474:spdk_nvme_print_completion, repeats for every in-flight READ on qid:1 (only cid, lba, and sqhd vary) between 18:50:04.323 and 18:50:04.746 ...]
lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.087 [2024-12-14 18:50:04.730700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.087 [2024-12-14 18:50:04.734037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.087 [2024-12-14 18:50:04.734068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.087 [2024-12-14 18:50:04.734080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.087 [2024-12-14 18:50:04.736923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.087 [2024-12-14 18:50:04.736965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.087 [2024-12-14 18:50:04.736975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.087 [2024-12-14 18:50:04.740231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.087 [2024-12-14 18:50:04.740273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.087 [2024-12-14 18:50:04.740284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.087 [2024-12-14 18:50:04.743795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.087 [2024-12-14 18:50:04.743838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.087 [2024-12-14 18:50:04.743849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.087 [2024-12-14 18:50:04.746260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.087 [2024-12-14 18:50:04.746292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.087 [2024-12-14 18:50:04.746304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.087 [2024-12-14 18:50:04.749477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.087 [2024-12-14 18:50:04.749521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.087 [2024-12-14 18:50:04.749533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.087 [2024-12-14 18:50:04.752239] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.087 [2024-12-14 18:50:04.752282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.087 [2024-12-14 18:50:04.752294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.087 [2024-12-14 18:50:04.755547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.087 [2024-12-14 18:50:04.755580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.087 [2024-12-14 18:50:04.755591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.087 [2024-12-14 18:50:04.759679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.087 [2024-12-14 18:50:04.759709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.087 [2024-12-14 18:50:04.759719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.087 [2024-12-14 18:50:04.763277] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.087 [2024-12-14 18:50:04.763314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.087 [2024-12-14 18:50:04.763325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.087 [2024-12-14 18:50:04.765846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.087 [2024-12-14 18:50:04.765878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.087 [2024-12-14 18:50:04.765890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.087 [2024-12-14 18:50:04.769625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.087 [2024-12-14 18:50:04.769667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.087 [2024-12-14 18:50:04.769679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.087 [2024-12-14 18:50:04.772276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.087 [2024-12-14 18:50:04.772318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.087 [2024-12-14 18:50:04.772328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.087 [2024-12-14 18:50:04.775653] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 
00:29:49.087 [2024-12-14 18:50:04.775694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.087 [2024-12-14 18:50:04.775705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.087 [2024-12-14 18:50:04.778968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.087 [2024-12-14 18:50:04.779015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.087 [2024-12-14 18:50:04.779026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.087 [2024-12-14 18:50:04.781810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.087 [2024-12-14 18:50:04.781842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.087 [2024-12-14 18:50:04.781854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.087 [2024-12-14 18:50:04.785248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.087 [2024-12-14 18:50:04.785280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.087 [2024-12-14 18:50:04.785291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.087 [2024-12-14 18:50:04.788184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.087 [2024-12-14 18:50:04.788238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.087 [2024-12-14 18:50:04.788249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.087 [2024-12-14 18:50:04.790836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.088 [2024-12-14 18:50:04.790868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.088 [2024-12-14 18:50:04.790879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.088 [2024-12-14 18:50:04.794061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.088 [2024-12-14 18:50:04.794104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.088 [2024-12-14 18:50:04.794115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.088 [2024-12-14 18:50:04.797725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.088 [2024-12-14 18:50:04.797761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.088 [2024-12-14 18:50:04.797772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.088 [2024-12-14 18:50:04.800758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.088 [2024-12-14 18:50:04.800790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.088 [2024-12-14 18:50:04.800800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.088 [2024-12-14 18:50:04.803319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.088 [2024-12-14 18:50:04.803352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.088 [2024-12-14 18:50:04.803363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.088 [2024-12-14 18:50:04.806655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.088 [2024-12-14 18:50:04.806686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.088 [2024-12-14 18:50:04.806697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.088 [2024-12-14 18:50:04.810335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.088 [2024-12-14 18:50:04.810369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.088 [2024-12-14 18:50:04.810381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.088 [2024-12-14 18:50:04.814067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.088 [2024-12-14 18:50:04.814100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.088 [2024-12-14 18:50:04.814111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.088 [2024-12-14 18:50:04.816764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.088 [2024-12-14 18:50:04.816808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.088 [2024-12-14 18:50:04.816819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.088 [2024-12-14 18:50:04.820512] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.088 [2024-12-14 18:50:04.820546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.088 [2024-12-14 18:50:04.820557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.088 [2024-12-14 18:50:04.824751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.088 [2024-12-14 18:50:04.824785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.088 [2024-12-14 18:50:04.824797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.088 [2024-12-14 18:50:04.827441] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.088 [2024-12-14 18:50:04.827474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.088 [2024-12-14 18:50:04.827485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.088 [2024-12-14 18:50:04.830761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.088 [2024-12-14 18:50:04.830794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.088 [2024-12-14 18:50:04.830805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.088 [2024-12-14 18:50:04.834117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.088 [2024-12-14 18:50:04.834150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.088 [2024-12-14 18:50:04.834161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.349 [2024-12-14 18:50:04.837342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.349 [2024-12-14 18:50:04.837375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.349 [2024-12-14 18:50:04.837386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.349 [2024-12-14 18:50:04.840874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.349 [2024-12-14 18:50:04.840906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.349 [2024-12-14 18:50:04.840918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:29:49.349 [2024-12-14 18:50:04.843440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.349 [2024-12-14 18:50:04.843470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.349 [2024-12-14 18:50:04.843481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.349 [2024-12-14 18:50:04.847241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.349 [2024-12-14 18:50:04.847274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.349 [2024-12-14 18:50:04.847286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.349 [2024-12-14 18:50:04.849565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.349 [2024-12-14 18:50:04.849611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.349 [2024-12-14 18:50:04.849624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.349 [2024-12-14 18:50:04.853246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.349 [2024-12-14 18:50:04.853278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.349 [2024-12-14 18:50:04.853289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.349 [2024-12-14 18:50:04.856301] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.349 [2024-12-14 18:50:04.856332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.349 [2024-12-14 18:50:04.856346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.349 [2024-12-14 18:50:04.859551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.349 [2024-12-14 18:50:04.859595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.349 [2024-12-14 18:50:04.859618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.349 [2024-12-14 18:50:04.863539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.349 [2024-12-14 18:50:04.863570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.349 [2024-12-14 18:50:04.863582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.349 [2024-12-14 18:50:04.866275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.349 [2024-12-14 18:50:04.866306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.349 [2024-12-14 18:50:04.866322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.349 [2024-12-14 18:50:04.869700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.349 [2024-12-14 18:50:04.869732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.349 [2024-12-14 18:50:04.869746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.349 [2024-12-14 18:50:04.873900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.349 [2024-12-14 18:50:04.873932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.349 [2024-12-14 18:50:04.873943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.349 [2024-12-14 18:50:04.876565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.349 [2024-12-14 18:50:04.876616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.349 [2024-12-14 18:50:04.876628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.349 [2024-12-14 18:50:04.879682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.349 [2024-12-14 18:50:04.879726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.349 [2024-12-14 18:50:04.879737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.349 [2024-12-14 18:50:04.883195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.349 [2024-12-14 18:50:04.883229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.349 [2024-12-14 18:50:04.883240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.349 [2024-12-14 18:50:04.885649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.349 [2024-12-14 18:50:04.885678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.349 [2024-12-14 18:50:04.885689] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.349 [2024-12-14 18:50:04.888919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.349 [2024-12-14 18:50:04.888952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.349 [2024-12-14 18:50:04.888962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.349 [2024-12-14 18:50:04.892825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.349 [2024-12-14 18:50:04.892862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.892873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.895622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.895652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.895662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.898626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.898659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.898669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.901618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.901648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.901659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.904892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.904935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.904946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.907517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.907549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.907560] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.910267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.910297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.910308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.913559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.913592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.913615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.916129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.916162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.916173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.919766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.919797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.919808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.923346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.923379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.923390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.926080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.926111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.926122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.929243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.929276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:49.350 [2024-12-14 18:50:04.929286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.932368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.932400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.932412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.935450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.935482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.935493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.939059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.939090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.939101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.942287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.942318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.942329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.945544] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.945576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.945587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.948414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.948446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.948457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.951778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.951811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.951822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.955086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.955119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.955129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.958238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.958270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.958280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.960904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.960936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.960947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.964017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.964050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.964061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.967085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.967117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.967128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.969709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.969739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.969750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.972988] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.973020] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.973031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.975952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.350 [2024-12-14 18:50:04.975984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.350 [2024-12-14 18:50:04.975995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.350 [2024-12-14 18:50:04.978921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.351 [2024-12-14 18:50:04.978952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.351 [2024-12-14 18:50:04.978963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.351 [2024-12-14 18:50:04.982297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.351 [2024-12-14 18:50:04.982328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.351 [2024-12-14 18:50:04.982339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.351 [2024-12-14 18:50:04.985876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.351 [2024-12-14 18:50:04.985907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.351 [2024-12-14 18:50:04.985918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.351 [2024-12-14 18:50:04.989464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.351 [2024-12-14 18:50:04.989497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.351 [2024-12-14 18:50:04.989508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.351 [2024-12-14 18:50:04.991640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.351 [2024-12-14 18:50:04.991670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.351 [2024-12-14 18:50:04.991680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.351 [2024-12-14 18:50:04.995478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.351 [2024-12-14 18:50:04.995521] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.351 [2024-12-14 18:50:04.995533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.351 [2024-12-14 18:50:04.998960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.351 [2024-12-14 18:50:04.999007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.351 [2024-12-14 18:50:04.999019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.351 [2024-12-14 18:50:05.001522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.351 [2024-12-14 18:50:05.001553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.351 [2024-12-14 18:50:05.001564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.351 [2024-12-14 18:50:05.004975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.351 [2024-12-14 18:50:05.005007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.351 [2024-12-14 18:50:05.005018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.351 [2024-12-14 18:50:05.008364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.351 [2024-12-14 18:50:05.008395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.351 [2024-12-14 18:50:05.008406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.351 [2024-12-14 18:50:05.010985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.351 [2024-12-14 18:50:05.011052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.351 [2024-12-14 18:50:05.011064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.351 [2024-12-14 18:50:05.014381] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.351 [2024-12-14 18:50:05.014413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.351 [2024-12-14 18:50:05.014424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.351 [2024-12-14 18:50:05.017348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x17e5930) 00:29:49.351 [2024-12-14 18:50:05.017380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.351 [2024-12-14 18:50:05.017391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.351 [2024-12-14 18:50:05.020678] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.351 [2024-12-14 18:50:05.020718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.351 [2024-12-14 18:50:05.020729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.351 [2024-12-14 18:50:05.024014] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.351 [2024-12-14 18:50:05.024045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.351 [2024-12-14 18:50:05.024056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.351 [2024-12-14 18:50:05.026938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.351 [2024-12-14 18:50:05.026969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.351 [2024-12-14 18:50:05.026980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.351 [2024-12-14 18:50:05.029967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.351 [2024-12-14 18:50:05.029999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.351 [2024-12-14 18:50:05.030009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.351 [2024-12-14 18:50:05.033436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.351 [2024-12-14 18:50:05.033468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.351 [2024-12-14 18:50:05.033479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.351 [2024-12-14 18:50:05.036028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.351 [2024-12-14 18:50:05.036060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.351 [2024-12-14 18:50:05.036071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.351 [2024-12-14 18:50:05.039190] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930)
00:29:49.351 [2024-12-14 18:50:05.039223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.351 [2024-12-14 18:50:05.039234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... repeated nvme_tcp data digest error records on tqpair=(0x17e5930) omitted; each qid:1 READ completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:29:49.613 9709.00 IOPS, 1213.62 MiB/s [2024-12-14T18:50:05.366Z]
[... repeated nvme_tcp data digest error records omitted ...]
00:29:49.877 [2024-12-14 18:50:05.482185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930)
00:29:49.877 [2024-12-14 18:50:05.482216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.877 [2024-12-14 18:50:05.482227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:49.877 [2024-12-14 18:50:05.484845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.484876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.484887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.487909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.487941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.487952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.491098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.491132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.491143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.493950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.493981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.493992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.497306] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.497337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.497348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.500566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.500608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.500620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.503450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.503482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.503493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.507369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.507401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.507411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.510134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.510165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.510176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.513509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.513542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.513553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.517343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.517385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.517396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.520193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.520225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.520237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.523557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.523592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.523617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.527232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.527266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.527277] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.530716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.530747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.530758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.533333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.533364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.533390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.537011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.537040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.537051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.540384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.540444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.540455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.543802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.543865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.543877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.547336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.547377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.547388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.550243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.550274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.877 [2024-12-14 18:50:05.550285] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.877 [2024-12-14 18:50:05.553183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.877 [2024-12-14 18:50:05.553215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.553226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.556182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.556214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.556226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.559264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.559297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.559307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.562694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.562725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.562736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.565753] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.565784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.565795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.569008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.569041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.569052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.572070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.572102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:49.878 [2024-12-14 18:50:05.572113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.575059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.575090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.575101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.578040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.578071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.578081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.581102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.581136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.581147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.584464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.584498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.584509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.587866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.587899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.587911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.591487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.591520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.591532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.594699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.594729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.594740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.597491] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.597524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.597535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.600805] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.600839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.600851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.604176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.604209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.604220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.607427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.607460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.607472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.610466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.610499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.610510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.613566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.613609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.613622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.617018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.617050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.617062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.619687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.619718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.619729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.878 [2024-12-14 18:50:05.623197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:49.878 [2024-12-14 18:50:05.623231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.878 [2024-12-14 18:50:05.623243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.139 [2024-12-14 18:50:05.626518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.139 [2024-12-14 18:50:05.626549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.139 [2024-12-14 18:50:05.626560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.139 [2024-12-14 18:50:05.629692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.139 [2024-12-14 18:50:05.629730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.139 [2024-12-14 18:50:05.629742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.139 [2024-12-14 18:50:05.633296] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.139 [2024-12-14 18:50:05.633333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.139 [2024-12-14 18:50:05.633345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.139 [2024-12-14 18:50:05.637085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.139 [2024-12-14 18:50:05.637119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.139 [2024-12-14 18:50:05.637131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.139 [2024-12-14 18:50:05.641054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.139 
[2024-12-14 18:50:05.641088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.139 [2024-12-14 18:50:05.641100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.139 [2024-12-14 18:50:05.643866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.139 [2024-12-14 18:50:05.643899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.139 [2024-12-14 18:50:05.643911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.139 [2024-12-14 18:50:05.647730] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.139 [2024-12-14 18:50:05.647760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.139 [2024-12-14 18:50:05.647771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.139 [2024-12-14 18:50:05.651044] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.139 [2024-12-14 18:50:05.651079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.139 [2024-12-14 18:50:05.651090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.139 [2024-12-14 18:50:05.654345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.139 [2024-12-14 18:50:05.654379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.139 [2024-12-14 18:50:05.654390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.139 [2024-12-14 18:50:05.657445] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.139 [2024-12-14 18:50:05.657487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.139 [2024-12-14 18:50:05.657498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.139 [2024-12-14 18:50:05.660802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.139 [2024-12-14 18:50:05.660830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.139 [2024-12-14 18:50:05.660841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.139 [2024-12-14 18:50:05.663832] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x17e5930) 00:29:50.139 [2024-12-14 18:50:05.663859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.663870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.666650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.666676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.666687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.670180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.670209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.670220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.672789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.672817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.672828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.675880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.675909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.675920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.679423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.679452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.679463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.681904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.681931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.681941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.685354] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.685382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.685393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.689473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.689502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.689513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.693164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.693192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.693203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.696630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.696658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.696669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.698797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.698824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.698834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.702425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.702454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.702464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.705304] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.705333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.705344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:29:50.140 [2024-12-14 18:50:05.708018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.708047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.708058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.711389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.711419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.711430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.714263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.714292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.714302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.717555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.717584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.717594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.720528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.720557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.720567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.723349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.723378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.723388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.727248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.727277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.727287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.730081] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.730108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.730120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.733408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.733437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.733447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.736776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.736805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.736816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.739468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.739496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.739507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.743114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.743145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.743157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.745945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.745974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.745985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.749272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.140 [2024-12-14 18:50:05.749302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.140 [2024-12-14 18:50:05.749313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.140 [2024-12-14 18:50:05.752576] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.752616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.752628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.756039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.756068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.756079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.759284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.759314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.759325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.762411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.762441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.762452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.765948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.765978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.765989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.768655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.768682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.768693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.772012] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.772042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.772052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.775160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.775189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.775201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.778714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.778742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.778752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.781166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.781194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.781205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.785108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.785136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.785147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.787911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.787938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.787949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.791145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.791174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.791185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.794311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.794339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:50.141 [2024-12-14 18:50:05.794349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.796961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.796990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.797001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.800154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.800183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.800194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.803311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.803340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.803351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.805939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.805967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.805978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.809260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.809289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.809299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.812133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.812161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.812171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.815575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.815618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17120 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.815630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.818923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.818952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.818963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.822172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.822202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.822213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.825357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.825386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.825397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.828256] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.828285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.828296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.831777] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.831807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.831817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.834703] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.834732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.834742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.837934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.141 [2024-12-14 18:50:05.837962] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.141 [2024-12-14 18:50:05.837972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.141 [2024-12-14 18:50:05.841156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.142 [2024-12-14 18:50:05.841185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.142 [2024-12-14 18:50:05.841196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.142 [2024-12-14 18:50:05.844608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.142 [2024-12-14 18:50:05.844635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.142 [2024-12-14 18:50:05.844645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.142 [2024-12-14 18:50:05.847668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.142 [2024-12-14 18:50:05.847697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.142 [2024-12-14 18:50:05.847707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.142 [2024-12-14 18:50:05.850503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.142 [2024-12-14 18:50:05.850532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.142 [2024-12-14 18:50:05.850542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.142 [2024-12-14 18:50:05.853327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.142 [2024-12-14 18:50:05.853355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.142 [2024-12-14 18:50:05.853365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.142 [2024-12-14 18:50:05.857037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.142 [2024-12-14 18:50:05.857067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.142 [2024-12-14 18:50:05.857077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.142 [2024-12-14 18:50:05.859508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.142 [2024-12-14 
18:50:05.859536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.142 [2024-12-14 18:50:05.859546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.142 [2024-12-14 18:50:05.862899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.142 [2024-12-14 18:50:05.862927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.142 [2024-12-14 18:50:05.862937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.142 [2024-12-14 18:50:05.866178] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.142 [2024-12-14 18:50:05.866206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.142 [2024-12-14 18:50:05.866217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.142 [2024-12-14 18:50:05.869182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.142 [2024-12-14 18:50:05.869212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.142 [2024-12-14 18:50:05.869222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.142 [2024-12-14 18:50:05.871752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.142 [2024-12-14 18:50:05.871781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.142 [2024-12-14 18:50:05.871791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.142 [2024-12-14 18:50:05.874950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.142 [2024-12-14 18:50:05.874979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.142 [2024-12-14 18:50:05.874997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.142 [2024-12-14 18:50:05.878330] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.142 [2024-12-14 18:50:05.878358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.142 [2024-12-14 18:50:05.878369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.142 [2024-12-14 18:50:05.881311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x17e5930) 00:29:50.142 [2024-12-14 18:50:05.881340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.142 [2024-12-14 18:50:05.881350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.142 [2024-12-14 18:50:05.884251] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.142 [2024-12-14 18:50:05.884283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.142 [2024-12-14 18:50:05.884295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.142 [2024-12-14 18:50:05.887751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.142 [2024-12-14 18:50:05.887780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.142 [2024-12-14 18:50:05.887790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.403 [2024-12-14 18:50:05.891083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.403 [2024-12-14 18:50:05.891113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.403 [2024-12-14 18:50:05.891124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.403 [2024-12-14 18:50:05.893452] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.403 [2024-12-14 18:50:05.893497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.403 [2024-12-14 18:50:05.893507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.403 [2024-12-14 18:50:05.897019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.403 [2024-12-14 18:50:05.897048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.403 [2024-12-14 18:50:05.897059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.403 [2024-12-14 18:50:05.900450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.403 [2024-12-14 18:50:05.900480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.403 [2024-12-14 18:50:05.900490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.403 [2024-12-14 18:50:05.903977] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.403 [2024-12-14 18:50:05.904006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.403 [2024-12-14 18:50:05.904018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.403 [2024-12-14 18:50:05.906557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.403 [2024-12-14 18:50:05.906585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.403 [2024-12-14 18:50:05.906595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.403 [2024-12-14 18:50:05.910533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.403 [2024-12-14 18:50:05.910562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.403 [2024-12-14 18:50:05.910573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.403 [2024-12-14 18:50:05.914581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.403 [2024-12-14 18:50:05.914618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.403 [2024-12-14 18:50:05.914629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.403 [2024-12-14 18:50:05.917418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.403 [2024-12-14 18:50:05.917447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.403 [2024-12-14 18:50:05.917458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.403 [2024-12-14 18:50:05.920469] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.920497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.920508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.923392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.923422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.923432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:50.404 [2024-12-14 18:50:05.926803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.926832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.926847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.929627] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.929656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.929666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.932548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.932577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.932587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.935614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.935642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.935652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.938284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.938311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.938322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.941461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.941490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.941500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.944794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.944824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.944835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.947998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.948026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.948036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.950387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.950415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.950425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.953635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.953663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.953673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.956661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.956690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.956700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.959664] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.959692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.959704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.963321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.963350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.963360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.967379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.967408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.967418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.970908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.970934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.970945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.973244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.973273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.973284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.976300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.976329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.976340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.979535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.979563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.979574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.982448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.982475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.982486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.985270] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.985299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.985309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.988963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.988993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.989004] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.992343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.992372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.992382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.994977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.995013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.995023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:05.998240] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:05.998268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:05.998279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:06.001108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:06.001136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:06.001147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.404 [2024-12-14 18:50:06.004674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.404 [2024-12-14 18:50:06.004701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.404 [2024-12-14 18:50:06.004712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.008205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.008232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.008243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.010617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.010644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:50.405 [2024-12-14 18:50:06.010654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.013775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.013803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.013814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.016795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.016823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.016833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.019554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.019583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.019594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.023193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.023223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.023234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.026034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.026062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.026073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.029038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.029068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.029078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.032582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.032620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:192 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.032631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.035369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.035397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.035408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.038881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.038909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.038919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.041587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.041640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.041652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.045341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.045370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.045381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.048154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.048184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.048195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.051718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.051747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.051758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.054788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.054817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.054828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.058098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.058128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.058140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.060924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.060953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.060964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.064176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.064206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.064217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.067793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.067822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.067833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.071156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.071187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.071198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.073822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.073851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.073861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.077781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.077811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.077822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.081444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.081474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.081484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.085170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.085199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.085210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.087299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.087342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.087353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.091203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.091232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.091243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.405 [2024-12-14 18:50:06.095128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.405 [2024-12-14 18:50:06.095156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.405 [2024-12-14 18:50:06.095167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.406 [2024-12-14 18:50:06.097964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.406 [2024-12-14 18:50:06.097991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.406 [2024-12-14 18:50:06.098002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.406 [2024-12-14 18:50:06.101494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.406 
[2024-12-14 18:50:06.101523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.406 [2024-12-14 18:50:06.101533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.406 [2024-12-14 18:50:06.105379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.406 [2024-12-14 18:50:06.105408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.406 [2024-12-14 18:50:06.105419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.406 [2024-12-14 18:50:06.107809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.406 [2024-12-14 18:50:06.107837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.406 [2024-12-14 18:50:06.107847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.406 [2024-12-14 18:50:06.111108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.406 [2024-12-14 18:50:06.111136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.406 [2024-12-14 18:50:06.111147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.406 [2024-12-14 18:50:06.114236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.406 [2024-12-14 18:50:06.114265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.406 [2024-12-14 18:50:06.114275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.406 [2024-12-14 18:50:06.117333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.406 [2024-12-14 18:50:06.117361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.406 [2024-12-14 18:50:06.117378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.406 [2024-12-14 18:50:06.120669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.406 [2024-12-14 18:50:06.120706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.406 [2024-12-14 18:50:06.120722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.406 [2024-12-14 18:50:06.123251] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x17e5930) 00:29:50.406 [2024-12-14 18:50:06.123280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.406 [2024-12-14 18:50:06.123290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.406 [2024-12-14 18:50:06.126505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.406 [2024-12-14 18:50:06.126532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.406 [2024-12-14 18:50:06.126543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.406 [2024-12-14 18:50:06.129347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.406 [2024-12-14 18:50:06.129375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.406 [2024-12-14 18:50:06.129386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.406 [2024-12-14 18:50:06.132475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.406 [2024-12-14 18:50:06.132504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.406 [2024-12-14 18:50:06.132515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.406 [2024-12-14 18:50:06.135882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.406 [2024-12-14 18:50:06.135912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.406 [2024-12-14 18:50:06.135922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.406 [2024-12-14 18:50:06.138730] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.406 [2024-12-14 18:50:06.138758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.406 [2024-12-14 18:50:06.138768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.406 [2024-12-14 18:50:06.142331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930) 00:29:50.406 [2024-12-14 18:50:06.142359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.406 [2024-12-14 18:50:06.142370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.406 [2024-12-14 18:50:06.145637] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930)
00:29:50.406 [2024-12-14 18:50:06.145664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.406 [2024-12-14 18:50:06.145674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:50.406 [2024-12-14 18:50:06.148310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930)
00:29:50.406 [2024-12-14 18:50:06.148339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.406 [2024-12-14 18:50:06.148352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:50.406 [2024-12-14 18:50:06.152364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930)
00:29:50.406 [2024-12-14 18:50:06.152393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.406 [2024-12-14 18:50:06.152404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:50.665 9685.00 IOPS, 1210.62 MiB/s [2024-12-14T18:50:06.418Z]
[2024-12-14 18:50:06.156970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17e5930)
00:29:50.665 [2024-12-14 18:50:06.156998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.665 [2024-12-14 18:50:06.157009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:50.665
00:29:50.665 Latency(us)
00:29:50.665 [2024-12-14T18:50:06.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:50.665 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:50.665 nvme0n1 : 2.00 9679.11 1209.89 0.00 0.00 1649.86 450.56 9532.51
00:29:50.665 [2024-12-14T18:50:06.418Z] ===================================================================================================================
00:29:50.665 [2024-12-14T18:50:06.418Z] Total : 9679.11 1209.89 0.00 0.00 1649.86 450.56 9532.51
00:29:50.665 {
00:29:50.665   "results": [
00:29:50.665     {
00:29:50.665       "job": "nvme0n1",
00:29:50.665       "core_mask": "0x2",
00:29:50.665       "workload": "randread",
00:29:50.665       "status": "finished",
00:29:50.665       "queue_depth": 16,
00:29:50.665       "io_size": 131072,
00:29:50.665       "runtime": 2.002871,
00:29:50.665       "iops": 9679.105643848256,
00:29:50.665       "mibps": 1209.888205481032,
00:29:50.665       "io_failed": 0,
00:29:50.665       "io_timeout": 0,
00:29:50.665       "avg_latency_us": 1649.8561458597114,
00:29:50.665       "min_latency_us": 450.56,
00:29:50.665       "max_latency_us": 9532.50909090909
00:29:50.665     }
00:29:50.665   ],
00:29:50.665   "core_count": 1
00:29:50.665 }
00:29:50.665 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:50.665 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
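The transient-error check that get_transient_errcount performs is expanded in the trace that follows: one bdev_get_iostat RPC against the bdevperf app, filtered through jq. A condensed sketch of that pipeline, assuming the rpc.py path and /var/tmp/bperf.sock socket shown in this log (the nvme_error counters are only populated because bdev_nvme_set_options --nvme-error-stat was applied earlier in the run):

  # Pull the count of COMMAND TRANSIENT TRANSPORT ERROR completions seen by nvme0n1
  # from the bdevperf app's per-bdev NVMe error statistics.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0]
      | .driver_specific
      | .nvme_error
      | .status_code
      | .command_transient_transport_error')
  # The test asserts that at least one injected digest error surfaced; in this
  # run the traced check evaluates as (( 625 > 0 )).
  (( errcount > 0 ))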
00:29:50.665 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:50.665 | .driver_specific
00:29:50.665 | .nvme_error
00:29:50.665 | .status_code
00:29:50.665 | .command_transient_transport_error'
00:29:50.924 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 625 > 0 ))
00:29:50.924 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 114974
00:29:50.924 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 114974 ']'
00:29:50.924 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 114974
00:29:50.924 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:50.924 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:50.924 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114974
00:29:50.924 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:50.924 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
killing process with pid 114974
18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114974'
Received shutdown signal, test time was about 2.000000 seconds
00:29:50.924
00:29:50.924 Latency(us)
[2024-12-14T18:50:06.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-14T18:50:06.677Z] ===================================================================================================================
[2024-12-14T18:50:06.677Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:50.924 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 114974
00:29:50.924 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 114974
00:29:51.182 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:51.182 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:51.182 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:51.183 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:51.183 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:51.183 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=115051
00:29:51.183 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 115051 /var/tmp/bperf.sock
00:29:51.183 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:51.183 18:50:06
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 115051 ']' 00:29:51.183 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:51.183 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:51.183 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:51.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:51.183 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:51.183 18:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:51.183 [2024-12-14 18:50:06.832754] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:51.183 [2024-12-14 18:50:06.832856] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115051 ] 00:29:51.441 [2024-12-14 18:50:06.970801] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.441 [2024-12-14 18:50:07.028228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.441 18:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:51.441 18:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:51.441 18:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:51.441 18:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:52.009 18:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:52.009 18:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.009 18:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:52.009 18:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.009 18:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:52.009 18:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:52.009 nvme0n1 00:29:52.268 18:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:52.268 18:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.268 18:50:07 
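Condensed, the bring-up traced around this point is: start bdevperf idle on its own RPC socket, enable NVMe error statistics with unlimited bdev retries, attach the controller with TCP data digest while crc32c error injection is disabled, then arm the corruption and drive I/O. A sketch under the paths and addresses visible in this log; which socket rpc_cmd resolves to is harness-defined, shown here as the target's default socket:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BPERF=/var/tmp/bperf.sock
  # Start bdevperf waiting for RPC configuration (-z) and let it come up.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r "$BPERF" -w randwrite -o 4096 -t 2 -q 128 -z &
  # Record NVMe error completions and retry failed I/O at the bdev layer.
  "$RPC" -s "$BPERF" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach with data digest (--ddgst) while crc32c injection is off...
  "$RPC" accel_error_inject_error -o crc32c -t disable
  "$RPC" -s "$BPERF" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # ...then arm corruption (flags exactly as traced) and run the workload.
  "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF" perform_tests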
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:52.268 18:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.268 18:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:52.268 18:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:52.268 Running I/O for 2 seconds... 00:29:52.268 [2024-12-14 18:50:07.880071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f6458 00:29:52.268 [2024-12-14 18:50:07.880975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.268 [2024-12-14 18:50:07.881033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:52.268 [2024-12-14 18:50:07.891927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e4de8 00:29:52.268 [2024-12-14 18:50:07.893338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.268 [2024-12-14 18:50:07.893377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:52.268 [2024-12-14 18:50:07.898846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e01f8 00:29:52.268 [2024-12-14 18:50:07.899495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.268 [2024-12-14 18:50:07.899523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:52.268 [2024-12-14 18:50:07.910523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f1868 00:29:52.268 [2024-12-14 18:50:07.911732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.268 [2024-12-14 18:50:07.911759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:52.268 [2024-12-14 18:50:07.919586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fc998 00:29:52.268 [2024-12-14 18:50:07.920465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.268 [2024-12-14 18:50:07.920493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:52.268 [2024-12-14 18:50:07.929041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f4298 00:29:52.268 [2024-12-14 18:50:07.929974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.268 [2024-12-14 18:50:07.930000] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:52.268 [2024-12-14 18:50:07.940546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e2c28 00:29:52.268 [2024-12-14 18:50:07.942030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.268 [2024-12-14 18:50:07.942056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:52.268 [2024-12-14 18:50:07.947410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e23b8 00:29:52.268 [2024-12-14 18:50:07.948106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.268 [2024-12-14 18:50:07.948132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:52.268 [2024-12-14 18:50:07.958944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f3a28 00:29:52.268 [2024-12-14 18:50:07.960191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.268 [2024-12-14 18:50:07.960217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:52.268 [2024-12-14 18:50:07.967906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fa7d8 00:29:52.268 [2024-12-14 18:50:07.968863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.268 [2024-12-14 18:50:07.968890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:52.268 [2024-12-14 18:50:07.977286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f20d8 00:29:52.268 [2024-12-14 18:50:07.978283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.268 [2024-12-14 18:50:07.978310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:52.268 [2024-12-14 18:50:07.988917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e0a68 00:29:52.268 [2024-12-14 18:50:07.990483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.268 [2024-12-14 18:50:07.990508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:52.268 [2024-12-14 18:50:07.995794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e4578 00:29:52.268 [2024-12-14 18:50:07.996584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.268 [2024-12-14 18:50:07.996616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:52.268 [2024-12-14 18:50:08.007396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f5be8 00:29:52.268 [2024-12-14 18:50:08.008716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.268 [2024-12-14 18:50:08.008752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:52.268 [2024-12-14 18:50:08.016358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f8618 00:29:52.268 [2024-12-14 18:50:08.017405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.268 [2024-12-14 18:50:08.017432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:52.527 [2024-12-14 18:50:08.025925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198eff18 00:29:52.527 [2024-12-14 18:50:08.026984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.527 [2024-12-14 18:50:08.027030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.037479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198de8a8 00:29:52.528 [2024-12-14 18:50:08.039105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.039132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.044559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e6738 00:29:52.528 [2024-12-14 18:50:08.045414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.045440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.056611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fef90 00:29:52.528 [2024-12-14 18:50:08.058045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.058071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.063780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f7538 00:29:52.528 [2024-12-14 18:50:08.064375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 
18:50:08.064401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.075624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198edd58 00:29:52.528 [2024-12-14 18:50:08.076746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.076771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.084584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e01f8 00:29:52.528 [2024-12-14 18:50:08.085469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.085496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.094054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e88f8 00:29:52.528 [2024-12-14 18:50:08.094944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.094969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.105617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fc998 00:29:52.528 [2024-12-14 18:50:08.107043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.107069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.112458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f96f8 00:29:52.528 [2024-12-14 18:50:08.113119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.113150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.124084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ebb98 00:29:52.528 [2024-12-14 18:50:08.125274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.125300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.133019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e23b8 00:29:52.528 [2024-12-14 18:50:08.133972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:52.528 [2024-12-14 18:50:08.133998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.142491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198eaab8 00:29:52.528 [2024-12-14 18:50:08.143527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.143553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.152339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e6300 00:29:52.528 [2024-12-14 18:50:08.153307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.153333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.161526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ebb98 00:29:52.528 [2024-12-14 18:50:08.162379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.162407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.172791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fe720 00:29:52.528 [2024-12-14 18:50:08.174130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.174155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.181736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e38d0 00:29:52.528 [2024-12-14 18:50:08.182825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.182852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.191217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f4f40 00:29:52.528 [2024-12-14 18:50:08.192326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.192351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.200205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fc128 00:29:52.528 [2024-12-14 18:50:08.201043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21939 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.201069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.209585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e5a90 00:29:52.528 [2024-12-14 18:50:08.210456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.210489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.221172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fd208 00:29:52.528 [2024-12-14 18:50:08.222627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.222664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.228058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198df988 00:29:52.528 [2024-12-14 18:50:08.228699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.228725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.239578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e9e10 00:29:52.528 [2024-12-14 18:50:08.240758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.240783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.248497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f20d8 00:29:52.528 [2024-12-14 18:50:08.249423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.249450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.257887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e7c50 00:29:52.528 [2024-12-14 18:50:08.258850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.258876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.269414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e1b48 00:29:52.528 [2024-12-14 18:50:08.270896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22233 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.270921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:52.528 [2024-12-14 18:50:08.276278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e1b48 00:29:52.528 [2024-12-14 18:50:08.276996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.528 [2024-12-14 18:50:08.277022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.286152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ed0b0 00:29:52.788 [2024-12-14 18:50:08.286856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.286882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.295418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f1868 00:29:52.788 [2024-12-14 18:50:08.296010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.296036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.306844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e38d0 00:29:52.788 [2024-12-14 18:50:08.307699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.307726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.315658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e6300 00:29:52.788 [2024-12-14 18:50:08.316510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.316535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.325494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fa7d8 00:29:52.788 [2024-12-14 18:50:08.326345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.326379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.334675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198de038 00:29:52.788 [2024-12-14 18:50:08.335409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:67 nsid:1 lba:6051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.335435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.343814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f2510 00:29:52.788 [2024-12-14 18:50:08.344402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.344427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.355186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f57b0 00:29:52.788 [2024-12-14 18:50:08.355923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.355951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.364348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f2510 00:29:52.788 [2024-12-14 18:50:08.364994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.365020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.373489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fa3a0 00:29:52.788 [2024-12-14 18:50:08.373975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.373999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.384492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e84c0 00:29:52.788 [2024-12-14 18:50:08.385635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.385677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.393744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fe2e8 00:29:52.788 [2024-12-14 18:50:08.394785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.394810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.402667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198eb760 00:29:52.788 [2024-12-14 18:50:08.403539] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.403577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.412058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e0630 00:29:52.788 [2024-12-14 18:50:08.412949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.412974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.423693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f8e88 00:29:52.788 [2024-12-14 18:50:08.425117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.425143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.430520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e4de8 00:29:52.788 [2024-12-14 18:50:08.431195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.431220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.442085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e5ec8 00:29:52.788 [2024-12-14 18:50:08.443285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.443312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.451080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f35f0 00:29:52.788 [2024-12-14 18:50:08.452140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.452166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.461557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e1f80 00:29:52.788 [2024-12-14 18:50:08.462548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.462579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.470972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f31b8 00:29:52.788 [2024-12-14 18:50:08.471989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.472020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.480743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e12d8 00:29:52.788 [2024-12-14 18:50:08.481599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.481633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.490277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e23b8 00:29:52.788 [2024-12-14 18:50:08.490955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.490985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.500470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ecc78 00:29:52.788 [2024-12-14 18:50:08.501325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.501357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.509757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fb048 00:29:52.788 [2024-12-14 18:50:08.510432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.788 [2024-12-14 18:50:08.510463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:52.788 [2024-12-14 18:50:08.522009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ed4e8 00:29:52.788 [2024-12-14 18:50:08.523498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.789 [2024-12-14 18:50:08.523529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.789 [2024-12-14 18:50:08.531245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f92c0 00:29:52.789 [2024-12-14 18:50:08.532815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.789 [2024-12-14 18:50:08.532841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.540671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f57b0 00:29:53.048 [2024-12-14 
18:50:08.541902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.541934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.549932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e6b70 00:29:53.048 [2024-12-14 18:50:08.551258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.551285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.559382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ddc00 00:29:53.048 [2024-12-14 18:50:08.560361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.560392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.568624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ed0b0 00:29:53.048 [2024-12-14 18:50:08.569477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.569520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.580436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e5220 00:29:53.048 [2024-12-14 18:50:08.581948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.581978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.587344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e23b8 00:29:53.048 [2024-12-14 18:50:08.588257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.588282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.599199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f8a50 00:29:53.048 [2024-12-14 18:50:08.600464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.600494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.607804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fd208 
00:29:53.048 [2024-12-14 18:50:08.609254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.609286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.618224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e8d30 00:29:53.048 [2024-12-14 18:50:08.619217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.619249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.627690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e95a0 00:29:53.048 [2024-12-14 18:50:08.628308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.628340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.636892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fdeb0 00:29:53.048 [2024-12-14 18:50:08.637582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.637616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.648110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f81e0 00:29:53.048 [2024-12-14 18:50:08.649463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.649488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.657221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ea248 00:29:53.048 [2024-12-14 18:50:08.658213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.658244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.666662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f2948 00:29:53.048 [2024-12-14 18:50:08.667759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.667785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.676038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with 
pdu=0x2000198f31b8 00:29:53.048 [2024-12-14 18:50:08.676808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.676838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.685256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ebb98 00:29:53.048 [2024-12-14 18:50:08.685911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.685942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.697522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fe2e8 00:29:53.048 [2024-12-14 18:50:08.698986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.699179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.704660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198feb58 00:29:53.048 [2024-12-14 18:50:08.705592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.705627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.716401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e9e10 00:29:53.048 [2024-12-14 18:50:08.717765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.717790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.725798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f2948 00:29:53.048 [2024-12-14 18:50:08.726843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.726882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.734970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e8d30 00:29:53.048 [2024-12-14 18:50:08.736084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.736117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.744455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10aed20) with pdu=0x2000198efae0 00:29:53.048 [2024-12-14 18:50:08.745278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.745439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.754111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198de038 00:29:53.048 [2024-12-14 18:50:08.754985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.755024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.766848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f7100 00:29:53.048 [2024-12-14 18:50:08.768373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.768534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.774146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e6738 00:29:53.048 [2024-12-14 18:50:08.774946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.775120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.786049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e1f80 00:29:53.048 [2024-12-14 18:50:08.787434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.787465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:53.048 [2024-12-14 18:50:08.795158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198de038 00:29:53.048 [2024-12-14 18:50:08.796233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.048 [2024-12-14 18:50:08.796265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.308 [2024-12-14 18:50:08.804698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e0630 00:29:53.308 [2024-12-14 18:50:08.805997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.308 [2024-12-14 18:50:08.806147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:53.308 [2024-12-14 18:50:08.814223] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e7818 00:29:53.308 [2024-12-14 18:50:08.815084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.308 [2024-12-14 18:50:08.815117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:53.308 [2024-12-14 18:50:08.823783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ed4e8 00:29:53.308 [2024-12-14 18:50:08.824834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.308 [2024-12-14 18:50:08.824859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:53.308 [2024-12-14 18:50:08.835596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e7c50 00:29:53.308 [2024-12-14 18:50:08.837215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.308 [2024-12-14 18:50:08.837241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:53.308 [2024-12-14 18:50:08.842662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ecc78 00:29:53.308 [2024-12-14 18:50:08.843512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.308 [2024-12-14 18:50:08.843537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:53.308 [2024-12-14 18:50:08.852733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e84c0 00:29:53.308 [2024-12-14 18:50:08.853352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.308 [2024-12-14 18:50:08.853510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:53.308 25937.00 IOPS, 101.32 MiB/s [2024-12-14T18:50:09.061Z] [2024-12-14 18:50:08.866144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f57b0 00:29:53.308 [2024-12-14 18:50:08.867643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.308 [2024-12-14 18:50:08.867820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:53.308 [2024-12-14 18:50:08.875718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f31b8 00:29:53.308 [2024-12-14 18:50:08.876807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.308 [2024-12-14 18:50:08.876976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:29:53.308 [2024-12-14 18:50:08.885450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198dece0 00:29:53.308 [2024-12-14 18:50:08.886756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.308 [2024-12-14 18:50:08.886921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:53.308 [2024-12-14 18:50:08.897808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198dfdc0 00:29:53.308 [2024-12-14 18:50:08.899613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.308 [2024-12-14 18:50:08.899809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.308 [2024-12-14 18:50:08.905326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e9e10 00:29:53.308 [2024-12-14 18:50:08.906345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.308 [2024-12-14 18:50:08.906512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:53.308 [2024-12-14 18:50:08.917518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e7818 00:29:53.308 [2024-12-14 18:50:08.919093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.308 [2024-12-14 18:50:08.919262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:53.308 [2024-12-14 18:50:08.924970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e0630 00:29:53.308 [2024-12-14 18:50:08.925765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.308 [2024-12-14 18:50:08.925958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:53.308 [2024-12-14 18:50:08.937232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e0ea0 00:29:53.308 [2024-12-14 18:50:08.938410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.308 [2024-12-14 18:50:08.938442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:53.308 [2024-12-14 18:50:08.946336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fa3a0 00:29:53.308 [2024-12-14 18:50:08.947250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.309 [2024-12-14 18:50:08.947294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 
cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:53.309 [2024-12-14 18:50:08.956111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198dece0 00:29:53.309 [2024-12-14 18:50:08.957105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.309 [2024-12-14 18:50:08.957131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:53.309 [2024-12-14 18:50:08.965590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e8088 00:29:53.309 [2024-12-14 18:50:08.966251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.309 [2024-12-14 18:50:08.966282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:53.309 [2024-12-14 18:50:08.977297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e6b70 00:29:53.309 [2024-12-14 18:50:08.978655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.309 [2024-12-14 18:50:08.978685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:53.309 [2024-12-14 18:50:08.987292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fb8b8 00:29:53.309 [2024-12-14 18:50:08.988810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.309 [2024-12-14 18:50:08.988835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:53.309 [2024-12-14 18:50:08.996841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f9f68 00:29:53.309 [2024-12-14 18:50:08.998097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.309 [2024-12-14 18:50:08.998266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:53.309 [2024-12-14 18:50:09.006323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fc128 00:29:53.309 [2024-12-14 18:50:09.007465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.309 [2024-12-14 18:50:09.007657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:53.309 [2024-12-14 18:50:09.016045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198dfdc0 00:29:53.309 [2024-12-14 18:50:09.017269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.309 [2024-12-14 18:50:09.017434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:53.309 [2024-12-14 18:50:09.028199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e0a68 00:29:53.309 [2024-12-14 18:50:09.029977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.309 [2024-12-14 18:50:09.030146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:53.309 [2024-12-14 18:50:09.035725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f0350 00:29:53.309 [2024-12-14 18:50:09.036758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.309 [2024-12-14 18:50:09.036925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:53.309 [2024-12-14 18:50:09.048431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198eaef0 00:29:53.309 [2024-12-14 18:50:09.050008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.309 [2024-12-14 18:50:09.050177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:53.309 [2024-12-14 18:50:09.058286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fd208 00:29:53.568 [2024-12-14 18:50:09.059522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.568 [2024-12-14 18:50:09.059736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.568 [2024-12-14 18:50:09.068665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e23b8 00:29:53.568 [2024-12-14 18:50:09.069893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.568 [2024-12-14 18:50:09.070061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.568 [2024-12-14 18:50:09.078762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e0ea0 00:29:53.568 [2024-12-14 18:50:09.079745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.568 [2024-12-14 18:50:09.079913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:53.568 [2024-12-14 18:50:09.088369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fe720 00:29:53.568 [2024-12-14 18:50:09.089162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.568 [2024-12-14 18:50:09.089193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:53.568 [2024-12-14 18:50:09.097943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198dece0 00:29:53.568 [2024-12-14 18:50:09.098419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.568 [2024-12-14 18:50:09.098444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:53.568 [2024-12-14 18:50:09.109024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e23b8 00:29:53.568 [2024-12-14 18:50:09.110160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.568 [2024-12-14 18:50:09.110192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.119954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f6890 00:29:53.569 [2024-12-14 18:50:09.121469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 18:50:09.121499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.127434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f4b08 00:29:53.569 [2024-12-14 18:50:09.128493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 18:50:09.128519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.138581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fd208 00:29:53.569 [2024-12-14 18:50:09.139425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 18:50:09.139583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.150775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e12d8 00:29:53.569 [2024-12-14 18:50:09.152541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 18:50:09.152575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.158101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fe2e8 00:29:53.569 [2024-12-14 18:50:09.159010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 18:50:09.159035] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.170048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fb8b8 00:29:53.569 [2024-12-14 18:50:09.171176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 18:50:09.171207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.179454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f7100 00:29:53.569 [2024-12-14 18:50:09.180454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 18:50:09.180485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.188918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198dfdc0 00:29:53.569 [2024-12-14 18:50:09.190006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 18:50:09.190049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.199115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ee5c8 00:29:53.569 [2024-12-14 18:50:09.199647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 18:50:09.199675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.208838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e9168 00:29:53.569 [2024-12-14 18:50:09.209623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 18:50:09.209652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.218066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f2d80 00:29:53.569 [2024-12-14 18:50:09.218857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 18:50:09.218896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.229837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e73e0 00:29:53.569 [2024-12-14 18:50:09.231180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 
18:50:09.231210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.238526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f7100 00:29:53.569 [2024-12-14 18:50:09.240145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 18:50:09.240175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.249075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198eee38 00:29:53.569 [2024-12-14 18:50:09.249913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 18:50:09.249945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.258408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f3a28 00:29:53.569 [2024-12-14 18:50:09.259099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 18:50:09.259131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.267666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e3060 00:29:53.569 [2024-12-14 18:50:09.268431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 18:50:09.268456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.278959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f92c0 00:29:53.569 [2024-12-14 18:50:09.280420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 18:50:09.280466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.288250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fa3a0 00:29:53.569 [2024-12-14 18:50:09.289299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 18:50:09.289346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.297927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e95a0 00:29:53.569 [2024-12-14 18:50:09.299287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:53.569 [2024-12-14 18:50:09.299314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.308131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fc128 00:29:53.569 [2024-12-14 18:50:09.309261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 18:50:09.309291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:53.569 [2024-12-14 18:50:09.317446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f3a28 00:29:53.569 [2024-12-14 18:50:09.318613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.569 [2024-12-14 18:50:09.318648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:53.828 [2024-12-14 18:50:09.327285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e8d30 00:29:53.828 [2024-12-14 18:50:09.328266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.828 [2024-12-14 18:50:09.328296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:53.828 [2024-12-14 18:50:09.337622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fc560 00:29:53.828 [2024-12-14 18:50:09.338736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.828 [2024-12-14 18:50:09.338766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:53.828 [2024-12-14 18:50:09.349377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f3a28 00:29:53.828 [2024-12-14 18:50:09.351254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.828 [2024-12-14 18:50:09.351279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.828 [2024-12-14 18:50:09.356660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f0350 00:29:53.828 [2024-12-14 18:50:09.357548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.828 [2024-12-14 18:50:09.357742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.828 [2024-12-14 18:50:09.368609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f2510 00:29:53.828 [2024-12-14 18:50:09.370020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6847 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:53.828 [2024-12-14 18:50:09.370051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.375535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e5ec8 00:29:53.829 [2024-12-14 18:50:09.376198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.376223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.387348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fb8b8 00:29:53.829 [2024-12-14 18:50:09.388747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.388772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.396708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198edd58 00:29:53.829 [2024-12-14 18:50:09.397620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.397662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.406330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e0630 00:29:53.829 [2024-12-14 18:50:09.407500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.407525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.418245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f7da8 00:29:53.829 [2024-12-14 18:50:09.419795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.419824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.425390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f7100 00:29:53.829 [2024-12-14 18:50:09.426321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.426351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.437424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fd640 00:29:53.829 [2024-12-14 18:50:09.438936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1311 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.438962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.446997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ff3c8 00:29:53.829 [2024-12-14 18:50:09.448004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.448035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.456657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ed0b0 00:29:53.829 [2024-12-14 18:50:09.457714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.457745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.468610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ebfd0 00:29:53.829 [2024-12-14 18:50:09.470413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.470437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.476060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e3498 00:29:53.829 [2024-12-14 18:50:09.476689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.476848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.488642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f0350 00:29:53.829 [2024-12-14 18:50:09.490074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.490104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.497948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fdeb0 00:29:53.829 [2024-12-14 18:50:09.499343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.499379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.506223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f92c0 00:29:53.829 [2024-12-14 18:50:09.506947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:92 nsid:1 lba:8385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.506972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.516327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f7970 00:29:53.829 [2024-12-14 18:50:09.517235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.517266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.528139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f2510 00:29:53.829 [2024-12-14 18:50:09.529606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.529646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.538135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fd640 00:29:53.829 [2024-12-14 18:50:09.539596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.539636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.547468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ee190 00:29:53.829 [2024-12-14 18:50:09.548872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.548901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.556888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f1430 00:29:53.829 [2024-12-14 18:50:09.558093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.558259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.566572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e3060 00:29:53.829 [2024-12-14 18:50:09.567655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.567702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:53.829 [2024-12-14 18:50:09.576404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f0bc0 00:29:53.829 [2024-12-14 18:50:09.577681] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.829 [2024-12-14 18:50:09.577704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:54.088 [2024-12-14 18:50:09.586451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e8088 00:29:54.088 [2024-12-14 18:50:09.587237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.088 [2024-12-14 18:50:09.587265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:54.088 [2024-12-14 18:50:09.596165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198de038 00:29:54.089 [2024-12-14 18:50:09.596781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.596856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.608003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e6738 00:29:54.089 [2024-12-14 18:50:09.609491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.609521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.615040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fd640 00:29:54.089 [2024-12-14 18:50:09.615951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.615975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.627080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ea680 00:29:54.089 [2024-12-14 18:50:09.628353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.628383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.637057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ed4e8 00:29:54.089 [2024-12-14 18:50:09.638150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.638175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.647128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e6b70 00:29:54.089 [2024-12-14 18:50:09.648310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.648340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.656563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ec408 00:29:54.089 [2024-12-14 18:50:09.657795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.657819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.666123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fb048 00:29:54.089 [2024-12-14 18:50:09.667023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.667188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.675729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e7c50 00:29:54.089 [2024-12-14 18:50:09.676696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.676720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.687660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fbcf0 00:29:54.089 [2024-12-14 18:50:09.689113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.689143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.694730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198ed0b0 00:29:54.089 [2024-12-14 18:50:09.695573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.695606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.706944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fc560 00:29:54.089 [2024-12-14 18:50:09.708173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.708203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.716139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198eaef0 00:29:54.089 [2024-12-14 
18:50:09.717016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.717047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.725689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f8e88 00:29:54.089 [2024-12-14 18:50:09.726599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.726646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.737471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fd640 00:29:54.089 [2024-12-14 18:50:09.739247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.739281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.748163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fcdd0 00:29:54.089 [2024-12-14 18:50:09.749742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.749771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.757881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e27f0 00:29:54.089 [2024-12-14 18:50:09.759336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.759367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.767630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198e99d8 00:29:54.089 [2024-12-14 18:50:09.768940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.768970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.777381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198de038 00:29:54.089 [2024-12-14 18:50:09.778512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.778546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.788919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f96f8 
00:29:54.089 [2024-12-14 18:50:09.790577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.790615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.796000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f31b8 00:29:54.089 [2024-12-14 18:50:09.796842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.796872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.807822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198fef90 00:29:54.089 [2024-12-14 18:50:09.809237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.809267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.817031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f1430 00:29:54.089 [2024-12-14 18:50:09.818179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.818209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.826805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f2948 00:29:54.089 [2024-12-14 18:50:09.827836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.827867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:54.089 [2024-12-14 18:50:09.836157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f3a28 00:29:54.089 [2024-12-14 18:50:09.837033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.089 [2024-12-14 18:50:09.837195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:54.357 [2024-12-14 18:50:09.846132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with pdu=0x2000198f7538 00:29:54.357 [2024-12-14 18:50:09.847351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.357 [2024-12-14 18:50:09.847378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:54.357 [2024-12-14 18:50:09.856301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10aed20) with 
pdu=0x2000198f2510
00:29:54.357 [2024-12-14 18:50:09.856890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:54.357 [2024-12-14 18:50:09.856932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:54.357 25674.50 IOPS, 100.29 MiB/s
00:29:54.357 Latency(us)
00:29:54.357 [2024-12-14T18:50:10.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:54.357 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:54.357 nvme0n1 : 2.00 25690.01 100.35 0.00 0.00 4976.64 1966.08 15013.70
00:29:54.357 [2024-12-14T18:50:10.110Z] ===================================================================================================================
00:29:54.357 [2024-12-14T18:50:10.110Z] Total : 25690.01 100.35 0.00 0.00 4976.64 1966.08 15013.70
00:29:54.357 {
00:29:54.357   "results": [
00:29:54.357     {
00:29:54.357       "job": "nvme0n1",
00:29:54.357       "core_mask": "0x2",
00:29:54.357       "workload": "randwrite",
00:29:54.357       "status": "finished",
00:29:54.357       "queue_depth": 128,
00:29:54.357       "io_size": 4096,
00:29:54.357       "runtime": 2.003775,
00:29:54.357       "iops": 25690.010105925066,
00:29:54.357       "mibps": 100.35160197626979,
00:29:54.357       "io_failed": 0,
00:29:54.357       "io_timeout": 0,
00:29:54.357       "avg_latency_us": 4976.638806457253,
00:29:54.357       "min_latency_us": 1966.08,
00:29:54.357       "max_latency_us": 15013.701818181818
00:29:54.357     }
00:29:54.357   ],
00:29:54.357   "core_count": 1
00:29:54.357 }
00:29:54.357 18:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
18:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
18:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
18:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:54.357 | .driver_specific
00:29:54.357 | .nvme_error
00:29:54.357 | .status_code
00:29:54.357 | .command_transient_transport_error'
00:29:54.631 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 201 > 0 ))
00:29:54.631 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 115051
00:29:54.631 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 115051 ']'
00:29:54.631 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 115051
00:29:54.631 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:54.631 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:54.631 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115051
00:29:54.631 killing process with pid 115051
Received shutdown signal, test time was about 2.000000 seconds
00:29:54.631
00:29:54.631 Latency(us)
00:29:54.631 [2024-12-14T18:50:10.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:54.631 [2024-12-14T18:50:10.384Z] ===================================================================================================================
00:29:54.631 [2024-12-14T18:50:10.384Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:54.631 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:54.631 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:54.631 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115051'
00:29:54.631 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 115051
00:29:54.631 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 115051
00:29:54.889 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:54.889 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:54.889 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:54.889 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:54.890 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:54.890 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=115128
00:29:54.890 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:54.890 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 115128 /var/tmp/bperf.sock
00:29:54.890 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 115128 ']'
00:29:54.890 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:54.890 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:54.890 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:54.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:54.890 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:54.890 18:50:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:54.890 [2024-12-14 18:50:10.590880] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:29:54.890 [2024-12-14 18:50:10.591306] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115128 ]
00:29:54.890 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:54.890 Zero copy mechanism will not be used.
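The (( 201 > 0 )) check traced above is the pass condition for the run that just shut down: get_transient_errcount reads the per-status-code counter that --nvme-error-stat maintains out of bdev_get_iostat and requires at least one COMMAND TRANSIENT TRANSPORT ERROR completion. A minimal standalone sketch of that query, assuming the same bperf RPC socket and bdev name as this run (the count_transient_errors helper is illustrative, not part of digest.sh):

    #!/usr/bin/env bash
    # Sketch: read the transient-transport-error completion count for a bdev
    # served by a bdevperf instance listening on /var/tmp/bperf.sock.
    count_transient_errors() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
                bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    errcount=$(count_transient_errors nvme0n1)
    # Pass only if at least one injected digest error surfaced, as in the
    # (( 201 > 0 )) assertion traced above.
    (( errcount > 0 ))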
00:29:55.147 [2024-12-14 18:50:10.731646] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:55.148 [2024-12-14 18:50:10.811538] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:29:56.081 18:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:56.081 18:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:29:56.081 18:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:56.081 18:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:56.340 18:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:56.340 18:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:56.340 18:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:56.340 18:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:56.340 18:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:56.340 18:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:56.599 nvme0n1
00:29:56.600 18:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:56.600 18:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:56.600 18:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:56.600 18:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:56.600 18:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:56.600 18:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:56.600 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:56.600 Zero copy mechanism will not be used.
00:29:56.600 Running I/O for 2 seconds...
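Condensed from the trace above, the second run's setup is: report NVMe completions per status code and disable bdev-level retries, clear any leftover accel error injection, attach the controller over TCP with the data digest (--ddgst) enabled, re-arm crc32c corruption, and start the timed workload. A plain-shell sketch of that same sequence (socket, address, and subsystem NQN exactly as in this run; reading -i 32 as the injection interval is our interpretation of the flag):

    #!/usr/bin/env bash
    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
    # Count completions per NVMe status code; --bdev-retry-count -1 stops the
    # bdev layer from retrying, so every digest failure is visible in iostat.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start from a clean accel state before attaching the controller.
    $RPC accel_error_inject_error -o crc32c -t disable
    # Attach over NVMe/TCP with data digest on, so each data PDU carries a
    # crc32c that the receiving side verifies.
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt crc32c results periodically (-i 32) so a slice of the digests
    # miscompare and those WRITEs complete with TRANSIENT TRANSPORT ERROR.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the I/O phase in the already-running bdevperf.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests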
00:29:56.600 [2024-12-14 18:50:12.272104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.600 [2024-12-14 18:50:12.272410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.600 [2024-12-14 18:50:12.272467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.600 [2024-12-14 18:50:12.276860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.600 [2024-12-14 18:50:12.277129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.600 [2024-12-14 18:50:12.277190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.600 [2024-12-14 18:50:12.281504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.600 [2024-12-14 18:50:12.281824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.600 [2024-12-14 18:50:12.281853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.600 [2024-12-14 18:50:12.286011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.600 [2024-12-14 18:50:12.286329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.600 [2024-12-14 18:50:12.286361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.600 [2024-12-14 18:50:12.290583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.600 [2024-12-14 18:50:12.290912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.600 [2024-12-14 18:50:12.290947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.600 [2024-12-14 18:50:12.295238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.600 [2024-12-14 18:50:12.295608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.600 [2024-12-14 18:50:12.295655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.600 [2024-12-14 18:50:12.299717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.600 [2024-12-14 18:50:12.300018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.600 [2024-12-14 18:50:12.300073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.600 [2024-12-14 18:50:12.304416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.600 [2024-12-14 18:50:12.304719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.600 [2024-12-14 18:50:12.304757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.600 [2024-12-14 18:50:12.308960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.600 [2024-12-14 18:50:12.309276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.600 [2024-12-14 18:50:12.309327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.600 [2024-12-14 18:50:12.313513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.600 [2024-12-14 18:50:12.313842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.600 [2024-12-14 18:50:12.313881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.600 [2024-12-14 18:50:12.317987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.600 [2024-12-14 18:50:12.318306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.600 [2024-12-14 18:50:12.318335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.600 [2024-12-14 18:50:12.322436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.600 [2024-12-14 18:50:12.322730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.600 [2024-12-14 18:50:12.322796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.600 [2024-12-14 18:50:12.327087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.600 [2024-12-14 18:50:12.327395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.600 [2024-12-14 18:50:12.327427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.600 [2024-12-14 18:50:12.331628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.600 [2024-12-14 18:50:12.331909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.600 [2024-12-14 18:50:12.331946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.600 [2024-12-14 18:50:12.336191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.600 [2024-12-14 18:50:12.336508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.600 [2024-12-14 18:50:12.336540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.600 [2024-12-14 18:50:12.340668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.600 [2024-12-14 18:50:12.340970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.600 [2024-12-14 18:50:12.341005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.600 [2024-12-14 18:50:12.345129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.600 [2024-12-14 18:50:12.345402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.600 [2024-12-14 18:50:12.345476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.600 [2024-12-14 18:50:12.349675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.600 [2024-12-14 18:50:12.349989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.600 [2024-12-14 18:50:12.350018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.860 [2024-12-14 18:50:12.354197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.354485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.354531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.358840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.359145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.359176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.363443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.363736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.363789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.367911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.368203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.368247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.372384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.372715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.372737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.376949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.377237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.377284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.381393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.381708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.381736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.385831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.386123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.386155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.390247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.390508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.390565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.394597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.394938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 
[2024-12-14 18:50:12.394974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.399061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.399341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.399411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.403555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.403884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.403912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.408175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.408462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.408517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.412685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.413000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.413036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.417216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.417524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.417556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.421782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.422094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.422126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.426398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.426717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.426739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.430849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.431152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.431177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.435288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.435570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.435606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.439855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.440160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.440194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.444340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.444621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.444686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.448861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.449157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.449192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.453430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.453737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.453774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.457949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.458264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.458312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.462438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.462760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.462800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.466948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.861 [2024-12-14 18:50:12.467289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.861 [2024-12-14 18:50:12.467318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.861 [2024-12-14 18:50:12.471441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.471716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.471770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.476012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.476289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.476327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.480598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.480923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.480947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.485110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.485404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.485431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.489599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.489907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.489930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.494155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.494461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.494493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.498742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.499088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.499118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.503254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.503511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.503562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.507628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.507949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.507987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.512239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.512527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.512566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.516807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.517105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.517145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.521310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 
[2024-12-14 18:50:12.521602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.521679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.526095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.526420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.526446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.530900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.531209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.531238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.535638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.535960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.535988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.540549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.540912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.540948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.545393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.545707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.545735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.550136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.550446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.550480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.554838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.555169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.555204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.559648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.559972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.560006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.564395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.564742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.564782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.569184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.569500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.569534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.574020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.574305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.574335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.578441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.578711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.578746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.582923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.583206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.583256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.587473] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.587757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.587813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.592009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.592295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.592341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.862 [2024-12-14 18:50:12.596493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.862 [2024-12-14 18:50:12.596788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.862 [2024-12-14 18:50:12.596845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.863 [2024-12-14 18:50:12.601177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.863 [2024-12-14 18:50:12.601475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.863 [2024-12-14 18:50:12.601502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.863 [2024-12-14 18:50:12.605560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.863 [2024-12-14 18:50:12.605831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.863 [2024-12-14 18:50:12.605889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.863 [2024-12-14 18:50:12.609936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:56.863 [2024-12-14 18:50:12.610211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.863 [2024-12-14 18:50:12.610245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.123 [2024-12-14 18:50:12.614427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.123 [2024-12-14 18:50:12.614736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.123 [2024-12-14 18:50:12.614773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:57.123 [2024-12-14 18:50:12.618797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.123 [2024-12-14 18:50:12.619110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.123 [2024-12-14 18:50:12.619142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.123 [2024-12-14 18:50:12.623354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.123 [2024-12-14 18:50:12.623634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.123 [2024-12-14 18:50:12.623670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.123 [2024-12-14 18:50:12.628091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.123 [2024-12-14 18:50:12.628365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.123 [2024-12-14 18:50:12.628400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.123 [2024-12-14 18:50:12.632504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.123 [2024-12-14 18:50:12.632802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.123 [2024-12-14 18:50:12.632855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.123 [2024-12-14 18:50:12.637051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.123 [2024-12-14 18:50:12.637323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.123 [2024-12-14 18:50:12.637385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.123 [2024-12-14 18:50:12.641723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.123 [2024-12-14 18:50:12.642030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.123 [2024-12-14 18:50:12.642059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.123 [2024-12-14 18:50:12.646318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.123 [2024-12-14 18:50:12.646611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.123 [2024-12-14 18:50:12.646631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.123 [2024-12-14 18:50:12.650778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.123 [2024-12-14 18:50:12.651086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.123 [2024-12-14 18:50:12.651118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.123 [2024-12-14 18:50:12.655302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.123 [2024-12-14 18:50:12.655573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.123 [2024-12-14 18:50:12.655639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.123 [2024-12-14 18:50:12.659676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.123 [2024-12-14 18:50:12.659949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.123 [2024-12-14 18:50:12.659983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.123 [2024-12-14 18:50:12.663993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.123 [2024-12-14 18:50:12.664279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.123 [2024-12-14 18:50:12.664315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.123 [2024-12-14 18:50:12.668460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.123 [2024-12-14 18:50:12.668753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.123 [2024-12-14 18:50:12.668804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.123 [2024-12-14 18:50:12.673060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.123 [2024-12-14 18:50:12.673367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.123 [2024-12-14 18:50:12.673404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.123 [2024-12-14 18:50:12.677529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.123 [2024-12-14 18:50:12.677861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.677888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.681962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.682230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.682266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.686449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.686716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.686752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.690790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.691127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.691161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.695220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.695507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.695558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.699599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.699886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.699927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.704167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.704446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.704476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.708704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.708988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.709021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.713262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.713562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.713593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.717849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.718110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.718147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.722234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.722543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.722572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.726786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.727079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.727123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.731377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.731691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.731720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.735823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.736110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.736143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.740360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.740682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 
[2024-12-14 18:50:12.740702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.744943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.745242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.745278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.749565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.749880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.749918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.754203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.754490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.754528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.758852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.759157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.759187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.763516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.763849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.763888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.768068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.768389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.768428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.772662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.772921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.772997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.777196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.777487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.777509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.781719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.782001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.782035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.786222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.786523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.786550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.790688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.791029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.791063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.795253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.795576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.795629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.800000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.124 [2024-12-14 18:50:12.800312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.124 [2024-12-14 18:50:12.800345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.124 [2024-12-14 18:50:12.804666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.125 [2024-12-14 18:50:12.804975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.125 [2024-12-14 18:50:12.805001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.125 [2024-12-14 18:50:12.809274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.125 [2024-12-14 18:50:12.809581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.125 [2024-12-14 18:50:12.809634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.125 [2024-12-14 18:50:12.813841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.125 [2024-12-14 18:50:12.814132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.125 [2024-12-14 18:50:12.814175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.125 [2024-12-14 18:50:12.818354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.125 [2024-12-14 18:50:12.818673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.125 [2024-12-14 18:50:12.818701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.125 [2024-12-14 18:50:12.822873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.125 [2024-12-14 18:50:12.823170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.125 [2024-12-14 18:50:12.823203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.125 [2024-12-14 18:50:12.827574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.125 [2024-12-14 18:50:12.827877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.125 [2024-12-14 18:50:12.827928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.125 [2024-12-14 18:50:12.832042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.125 [2024-12-14 18:50:12.832297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.125 [2024-12-14 18:50:12.832371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.125 [2024-12-14 18:50:12.836519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.125 [2024-12-14 18:50:12.836799] nvme_qpair.c: 
00:29:57.125 [2024-12-14 18:50:12.836799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.125 [2024-12-14 18:50:12.836872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
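Context for the repeated tcp.c:2233 data_crc32_calc_done errors above: NVMe/TCP supports an optional per-PDU data digest (DDGST), a CRC32C computed over the DATA PDU payload. The error fires when the CRC32C computed over the bytes that actually arrived does not match the digest carried in the PDU, and the affected WRITE is then completed as a transport error instead of silently accepting corrupt data. A minimal, self-contained sketch of such a check in C follows; this is not SPDK's implementation (SPDK uses its own accelerated CRC32C helpers in the transport code), and the payload size, fill pattern, and injected bit-flip are invented for illustration.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78).
 * Sanity check: crc32c((const uint8_t *)"123456789", 9) == 0xE3069283. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Stand-in for one 32-block WRITE payload; 4 KiB blocks assumed. */
    static uint8_t payload[32 * 4096];
    memset(payload, 0xA5, sizeof(payload));

    uint32_t ddgst = crc32c(payload, sizeof(payload)); /* digest sent with the PDU */
    payload[100] ^= 0x01;                              /* simulate in-flight corruption */
    uint32_t calc = crc32c(payload, sizeof(payload));  /* digest computed on receipt */

    if (calc != ddgst)
        fprintf(stderr, "Data digest error: ddgst=0x%08x calc=0x%08x\n",
                ddgst, calc);
    return 0;
}

Because a digest mismatch says only that the payload was corrupted somewhere in transit, the whole command is failed and any retry is left to the host, which matches the dnr:0 in the completions below.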
00:29:57.648 [2024-12-14 18:50:13.244621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90
00:29:57.648 [2024-12-14 18:50:13.244901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.648 [2024-12-14 18:50:13.244944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:57.648 [2024-12-14 18:50:13.249196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90
00:29:57.648 [2024-12-14 18:50:13.249484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.648 [2024-12-14 18:50:13.249522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:57.648 [2024-12-14 18:50:13.253720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90
00:29:57.648 [2024-12-14 18:50:13.254015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.648 [2024-12-14 18:50:13.254053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:57.648 [2024-12-14 18:50:13.258243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90
00:29:57.648 [2024-12-14 18:50:13.258522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.649 [2024-12-14 18:50:13.258565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:57.649 [2024-12-14 18:50:13.262744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90
00:29:57.649 [2024-12-14 18:50:13.263045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.649 [2024-12-14 18:50:13.263083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:57.649 6812.00 IOPS, 851.50 MiB/s [2024-12-14T18:50:13.402Z] [2024-12-14 18:50:13.268023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90
00:29:57.649 [2024-12-14 18:50:13.268283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.649 [2024-12-14 18:50:13.268318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:57.649 [2024-12-14 18:50:13.272489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90
00:29:57.649 [2024-12-14 18:50:13.272800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.649 [2024-12-14 18:50:13.272821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
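One sanity check on the bdevperf throughput sample embedded above: 851.50 MiB/s divided by 6812.00 IOPS is 0.125 MiB, i.e. 128 KiB per I/O. That is consistent with the logged len:32 if the namespace uses 4 KiB blocks (32 × 4096 B = 128 KiB); the block size itself is not printed in this excerpt, so that part is an inference.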
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.910 [2024-12-14 18:50:13.431696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.910 [2024-12-14 18:50:13.435870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.910 [2024-12-14 18:50:13.436148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.910 [2024-12-14 18:50:13.436175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.910 [2024-12-14 18:50:13.440373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.910 [2024-12-14 18:50:13.440689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.910 [2024-12-14 18:50:13.440714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.910 [2024-12-14 18:50:13.444900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.910 [2024-12-14 18:50:13.445194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.910 [2024-12-14 18:50:13.445239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.910 [2024-12-14 18:50:13.449409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.910 [2024-12-14 18:50:13.449716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.910 [2024-12-14 18:50:13.449736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.453827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.454132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.454175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.458190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.458462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.458520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.462589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.462905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.462934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.467037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.467365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.467409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.471647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.471917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.471976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.476239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.476511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.476572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.480690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.480969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.481012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.485230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.485505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.485548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.489779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.490088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.490119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.494390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.494702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.494729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.498917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.499258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.499296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.503560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.503890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.503921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.508229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.508533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.508561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.512748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.513041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.513077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.517370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.517688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.517711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.521903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.522184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.522217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.526438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 
[2024-12-14 18:50:13.526745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.526792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.531031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.531395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.531430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.535408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.535676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.535727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.539916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.540222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.540263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.544303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.544560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.544611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.548805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.549118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.549152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.553351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.553623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.553701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.558144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.558436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.558464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.562833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.563161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.563192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.567641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.567959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.567987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.572397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.911 [2024-12-14 18:50:13.572728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.911 [2024-12-14 18:50:13.572758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.911 [2024-12-14 18:50:13.577263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.912 [2024-12-14 18:50:13.577532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.912 [2024-12-14 18:50:13.577589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.912 [2024-12-14 18:50:13.582064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.912 [2024-12-14 18:50:13.582379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.912 [2024-12-14 18:50:13.582408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.912 [2024-12-14 18:50:13.586731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.912 [2024-12-14 18:50:13.587056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.912 [2024-12-14 18:50:13.587085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.912 [2024-12-14 18:50:13.591405] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.912 [2024-12-14 18:50:13.591727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.912 [2024-12-14 18:50:13.591753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.912 [2024-12-14 18:50:13.596145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.912 [2024-12-14 18:50:13.596435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.912 [2024-12-14 18:50:13.596473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.912 [2024-12-14 18:50:13.600833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.912 [2024-12-14 18:50:13.601155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.912 [2024-12-14 18:50:13.601183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.912 [2024-12-14 18:50:13.605420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.912 [2024-12-14 18:50:13.605730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.912 [2024-12-14 18:50:13.605775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.912 [2024-12-14 18:50:13.609891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.912 [2024-12-14 18:50:13.610181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.912 [2024-12-14 18:50:13.610212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.912 [2024-12-14 18:50:13.614351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.912 [2024-12-14 18:50:13.614669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.912 [2024-12-14 18:50:13.614710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.912 [2024-12-14 18:50:13.618918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.912 [2024-12-14 18:50:13.619213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.912 [2024-12-14 18:50:13.619248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:29:57.912 [2024-12-14 18:50:13.623490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.912 [2024-12-14 18:50:13.623781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.912 [2024-12-14 18:50:13.623823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.912 [2024-12-14 18:50:13.628106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.912 [2024-12-14 18:50:13.628404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.912 [2024-12-14 18:50:13.628432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.912 [2024-12-14 18:50:13.632656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.912 [2024-12-14 18:50:13.632964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.912 [2024-12-14 18:50:13.633000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.912 [2024-12-14 18:50:13.637220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.912 [2024-12-14 18:50:13.637534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.912 [2024-12-14 18:50:13.637564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.912 [2024-12-14 18:50:13.641672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.912 [2024-12-14 18:50:13.641972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.912 [2024-12-14 18:50:13.642007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.912 [2024-12-14 18:50:13.646037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.912 [2024-12-14 18:50:13.646326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.912 [2024-12-14 18:50:13.646370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.912 [2024-12-14 18:50:13.650473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.912 [2024-12-14 18:50:13.650764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.912 [2024-12-14 18:50:13.650822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.912 [2024-12-14 18:50:13.654866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.912 [2024-12-14 18:50:13.655181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.912 [2024-12-14 18:50:13.655216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.912 [2024-12-14 18:50:13.659397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:57.912 [2024-12-14 18:50:13.659719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.912 [2024-12-14 18:50:13.659744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.172 [2024-12-14 18:50:13.663970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.172 [2024-12-14 18:50:13.664262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.172 [2024-12-14 18:50:13.664305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.172 [2024-12-14 18:50:13.668450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.172 [2024-12-14 18:50:13.668738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.172 [2024-12-14 18:50:13.668781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.172 [2024-12-14 18:50:13.673039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.172 [2024-12-14 18:50:13.673347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.172 [2024-12-14 18:50:13.673385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.172 [2024-12-14 18:50:13.677579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.172 [2024-12-14 18:50:13.677900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.172 [2024-12-14 18:50:13.677928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.172 [2024-12-14 18:50:13.682176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.172 [2024-12-14 18:50:13.682481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.172 [2024-12-14 18:50:13.682512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.172 [2024-12-14 18:50:13.686672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.172 [2024-12-14 18:50:13.686954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.172 [2024-12-14 18:50:13.687002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.172 [2024-12-14 18:50:13.691198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.691531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.691562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.695714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.696024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.696058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.700107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.700425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.700454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.704768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.705043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.705081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.709271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.709562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.709590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.713830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.714137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.714165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.718259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.718531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.718574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.722537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.722848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.722878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.727111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.727470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.727506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.731594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.731892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.731935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.736034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.736349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.736377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.740630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.740936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.740972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.745363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.745687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 
[2024-12-14 18:50:13.745712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.750015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.750314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.750342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.754611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.754937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.754975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.759250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.759584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.759627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.763854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.764169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.764199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.768466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.768746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.768786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.773005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.773298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.773342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.777590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.777908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.777936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.782056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.782371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.782401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.786532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.786858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.786887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.790970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.791305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.791339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.795517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.795845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.795873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.800131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.800445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.800473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.804657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.804945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.804991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.809083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.809398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.809442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.173 [2024-12-14 18:50:13.813642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.173 [2024-12-14 18:50:13.813936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.173 [2024-12-14 18:50:13.813980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.818387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.818721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.818750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.823057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.823389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.823420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.827727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.828006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.828060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.832300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.832575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.832627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.837050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.837341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.837369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.841769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.842076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.842107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.846442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.846746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.846766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.851183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.851508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.851542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.855928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.856234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.856267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.860664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.861003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.861032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.865356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.865625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.865692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.870002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.870287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.870330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.874440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 
[2024-12-14 18:50:13.874742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.874783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.878944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.879244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.879271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.883408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.883707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.883739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.887725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.887998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.888047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.892181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.892492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.892523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.896659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.896932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.896973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.901153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.901431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.901474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.905661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.905919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.905954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.909918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.910180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.910230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.914240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.914496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.914545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.174 [2024-12-14 18:50:13.918532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.174 [2024-12-14 18:50:13.918850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.174 [2024-12-14 18:50:13.918878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.435 [2024-12-14 18:50:13.923027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.435 [2024-12-14 18:50:13.923296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.435 [2024-12-14 18:50:13.923327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.435 [2024-12-14 18:50:13.927634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.435 [2024-12-14 18:50:13.927981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.435 [2024-12-14 18:50:13.928008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.435 [2024-12-14 18:50:13.932049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.435 [2024-12-14 18:50:13.932348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.435 [2024-12-14 18:50:13.932378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.435 [2024-12-14 18:50:13.936542] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.435 [2024-12-14 18:50:13.936855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.435 [2024-12-14 18:50:13.936886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:13.940951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:13.941233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:13.941285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:13.945335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:13.945635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:13.945673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:13.949738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:13.950039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:13.950084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:13.954185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:13.954457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:13.954498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:13.958635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:13.958914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:13.958940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:13.963100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:13.963393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:13.963422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:29:58.436 [2024-12-14 18:50:13.967565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:13.967890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:13.967919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:13.972012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:13.972285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:13.972328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:13.976365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:13.976661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:13.976696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:13.980805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:13.981097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:13.981117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:13.985073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:13.985373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:13.985400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:13.989557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:13.989860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:13.989880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:13.993935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:13.994192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:13.994268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:13.998347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:13.998621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:13.998690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:14.002882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:14.003193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:14.003230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:14.007472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:14.007765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:14.007807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:14.011835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:14.012119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:14.012159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:14.016196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:14.016453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:14.016511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:14.020503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:14.020814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:14.020845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:14.024924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:14.025233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:14.025270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:14.029337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:14.029591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:14.029641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:14.033684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:14.033957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:14.034000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:14.038146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:14.038439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:14.038459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:14.042419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:14.042714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:14.042764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:14.047140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:14.047475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:14.047509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:14.051628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:14.051960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:14.051998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.436 [2024-12-14 18:50:14.056207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.436 [2024-12-14 18:50:14.056493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.436 [2024-12-14 18:50:14.056535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.060551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.060875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.060904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.064928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.065228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.065249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.069358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.069663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.069695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.073842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.074099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.074165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.078868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.079228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.079260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.083610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.083916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.083958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.088078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.088377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 
[2024-12-14 18:50:14.088424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.092497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.092811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.092854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.097078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.097357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.097392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.101444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.101712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.101748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.105914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.106204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.106247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.110525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.110830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.110853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.115154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.115465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.115499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.119805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.120114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.120140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.124411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.124713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.124740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.128943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.129243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.129274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.133314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.133600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.133651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.137705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.138007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.138036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.142145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.142425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.142453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.146478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.146743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.146779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.150881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.151180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.151231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.155270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.155589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.155622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.159619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.159893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.159952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.164136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.164438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.164483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.168645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.168947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.168991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.173259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.173550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.173592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.177639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.177912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.437 [2024-12-14 18:50:14.177970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.437 [2024-12-14 18:50:14.182087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.437 [2024-12-14 18:50:14.182359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.438 [2024-12-14 18:50:14.182415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.697 [2024-12-14 18:50:14.186583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.697 [2024-12-14 18:50:14.186906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.697 [2024-12-14 18:50:14.186944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.697 [2024-12-14 18:50:14.191161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.697 [2024-12-14 18:50:14.191475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.697 [2024-12-14 18:50:14.191502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.697 [2024-12-14 18:50:14.195807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.697 [2024-12-14 18:50:14.196084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.697 [2024-12-14 18:50:14.196127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.697 [2024-12-14 18:50:14.200320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.697 [2024-12-14 18:50:14.200599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.697 [2024-12-14 18:50:14.200657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.697 [2024-12-14 18:50:14.204774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.697 [2024-12-14 18:50:14.205074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.697 [2024-12-14 18:50:14.205118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.697 [2024-12-14 18:50:14.209277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.697 [2024-12-14 18:50:14.209582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.697 [2024-12-14 18:50:14.209621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.697 [2024-12-14 18:50:14.213709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.697 
[2024-12-14 18:50:14.213983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.697 [2024-12-14 18:50:14.214034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.697 [2024-12-14 18:50:14.218125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.697 [2024-12-14 18:50:14.218400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.697 [2024-12-14 18:50:14.218458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.697 [2024-12-14 18:50:14.222493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.697 [2024-12-14 18:50:14.222763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.697 [2024-12-14 18:50:14.222836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.697 [2024-12-14 18:50:14.226901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.697 [2024-12-14 18:50:14.227234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.697 [2024-12-14 18:50:14.227268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.697 [2024-12-14 18:50:14.231514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.697 [2024-12-14 18:50:14.231828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.697 [2024-12-14 18:50:14.231855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.697 [2024-12-14 18:50:14.236067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.697 [2024-12-14 18:50:14.236338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.697 [2024-12-14 18:50:14.236397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.697 [2024-12-14 18:50:14.240519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.697 [2024-12-14 18:50:14.240839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.697 [2024-12-14 18:50:14.240867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.697 [2024-12-14 18:50:14.245054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) 
with pdu=0x2000198fef90 00:29:58.698 [2024-12-14 18:50:14.245349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.698 [2024-12-14 18:50:14.245380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.698 [2024-12-14 18:50:14.249494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.698 [2024-12-14 18:50:14.249790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.698 [2024-12-14 18:50:14.249849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.698 [2024-12-14 18:50:14.254025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.698 [2024-12-14 18:50:14.254333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.698 [2024-12-14 18:50:14.254360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.698 [2024-12-14 18:50:14.258437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.698 [2024-12-14 18:50:14.258716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.698 [2024-12-14 18:50:14.258790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.698 [2024-12-14 18:50:14.262801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.698 [2024-12-14 18:50:14.263095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.698 [2024-12-14 18:50:14.263146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.698 6831.00 IOPS, 853.88 MiB/s [2024-12-14T18:50:14.451Z] [2024-12-14 18:50:14.268033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10af060) with pdu=0x2000198fef90 00:29:58.698 [2024-12-14 18:50:14.268313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.698 [2024-12-14 18:50:14.268347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.698 00:29:58.698 Latency(us) 00:29:58.698 [2024-12-14T18:50:14.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.698 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:58.698 nvme0n1 : 2.00 6829.29 853.66 0.00 0.00 2338.14 1772.45 5302.46 00:29:58.698 [2024-12-14T18:50:14.451Z] =================================================================================================================== 00:29:58.698 [2024-12-14T18:50:14.451Z] Total : 6829.29 853.66 0.00 
0.00 2338.14 1772.45 5302.46 00:29:58.698 { 00:29:58.698 "results": [ 00:29:58.698 { 00:29:58.698 "job": "nvme0n1", 00:29:58.698 "core_mask": "0x2", 00:29:58.698 "workload": "randwrite", 00:29:58.698 "status": "finished", 00:29:58.698 "queue_depth": 16, 00:29:58.698 "io_size": 131072, 00:29:58.698 "runtime": 2.004015, 00:29:58.698 "iops": 6829.290199923653, 00:29:58.698 "mibps": 853.6612749904566, 00:29:58.698 "io_failed": 0, 00:29:58.698 "io_timeout": 0, 00:29:58.698 "avg_latency_us": 2338.136190134577, 00:29:58.698 "min_latency_us": 1772.4509090909091, 00:29:58.698 "max_latency_us": 5302.458181818181 00:29:58.698 } 00:29:58.698 ], 00:29:58.698 "core_count": 1 00:29:58.698 } 00:29:58.698 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:58.698 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:58.698 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:58.698 | .driver_specific 00:29:58.698 | .nvme_error 00:29:58.698 | .status_code 00:29:58.698 | .command_transient_transport_error' 00:29:58.698 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:58.957 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 441 > 0 )) 00:29:58.957 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 115128 00:29:58.957 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 115128 ']' 00:29:58.957 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 115128 00:29:58.957 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:29:58.957 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:58.957 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115128 00:29:58.957 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:58.957 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:58.957 killing process with pid 115128 00:29:58.957 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115128' 00:29:58.957 Received shutdown signal, test time was about 2.000000 seconds 00:29:58.957 00:29:58.957 Latency(us) 00:29:58.957 [2024-12-14T18:50:14.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.957 [2024-12-14T18:50:14.710Z] =================================================================================================================== 00:29:58.957 [2024-12-14T18:50:14.710Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:58.957 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 115128 00:29:58.957 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 115128 00:29:59.216 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 114871 00:29:59.216 
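Note: the get_transient_errcount helper traced above boils down to one RPC-plus-jq pipeline over the bperf socket. A minimal standalone sketch, assuming bperf is still listening on /var/tmp/bperf.sock and exposes the nvme0n1 bdev (both taken from the trace; $SPDK_REPO is a placeholder for the repo root):

#!/usr/bin/env bash
# Count completions that ended in COMMAND TRANSIENT TRANSPORT ERROR for a bdev,
# mirroring the get_transient_errcount trace above.
get_transient_errcount() {
    local bdev=$1
    "$SPDK_REPO/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# The digest-error test passes only if at least one injected CRC32C data
# digest error surfaced as a transient transport error (441 in this run).
(( errcount > 0 ))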
18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 114871 ']' 00:29:59.216 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 114871 00:29:59.216 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:29:59.216 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:59.216 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114871 00:29:59.216 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:59.216 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:59.216 killing process with pid 114871 00:29:59.216 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114871' 00:29:59.216 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 114871 00:29:59.216 18:50:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 114871 00:29:59.474 00:29:59.474 real 0m16.302s 00:29:59.474 user 0m30.998s 00:29:59.474 sys 0m4.644s 00:29:59.474 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:59.474 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:59.474 ************************************ 00:29:59.474 END TEST nvmf_digest_error 00:29:59.474 ************************************ 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:59.733 rmmod nvme_tcp 00:29:59.733 rmmod nvme_fabrics 00:29:59.733 rmmod nvme_keyring 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 114871 ']' 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 114871 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 114871 ']' 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 114871 00:29:59.733 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (114871) - No such process 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- 
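Note: every killprocess call in this teardown follows the same guard pattern from autotest_common.sh. A rough sketch of that pattern as it appears in the trace (argument check, kill -0 liveness probe, comm-name lookup, then kill and reap); the sudo branch, which this run never takes, is omitted:

# Sketch of the traced killprocess guard, not the verbatim helper.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1      # '[' -z ... ']' guard
    kill -0 "$pid" || return 1     # prints "No such process" when already gone
    if [ "$(uname)" = Linux ]; then
        # Resolve the process name, e.g. reactor_0 / reactor_1 for SPDK apps.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1   # real helper targets sudo's child instead
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}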
common/autotest_common.sh@977 -- # echo 'Process with pid 114871 is not found' 00:29:59.733 Process with pid 114871 is not found 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:59.733 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:59.991 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:59.991 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:59.991 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:59.991 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:59.991 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:59.991 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.991 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.991 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.991 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:29:59.991 ************************************ 00:29:59.991 END TEST nvmf_digest 00:29:59.992 ************************************ 00:29:59.992 00:29:59.992 real 0m36.195s 00:29:59.992 user 1m7.746s 00:29:59.992 sys 0m9.965s 00:29:59.992 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:59.992 18:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:59.992 18:50:15 nvmf_tcp.nvmf_host -- 
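Note: nvmftestfini above unwinds three layers in order: the kernel NVMe modules, SPDK's iptables rules, and the veth/bridge topology. The same sequence condensed into a plain sketch, with interface, bridge, and namespace names exactly as they appear in the trace (errors are ignored so cleanup stays idempotent; the final netns delete is an assumption standing in for _remove_spdk_ns):

# Condensed sketch of the traced nvmftestfini / nvmf_veth_fini sequence.
modprobe -v -r nvme-tcp   # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
                          # (the real script retries this in a {1..20} loop)

# Keep every iptables rule except the SPDK_NVMF-tagged ones.
iptables-save | grep -v SPDK_NVMF | iptables-restore

for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br" nomaster 2>/dev/null
    ip link set "$br" down     2>/dev/null
done
ip link delete nvmf_br type bridge 2>/dev/null
ip link delete nvmf_init_if  2>/dev/null
ip link delete nvmf_init_if2 2>/dev/null
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2>/dev/null
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null   # assumed _remove_spdk_ns equivalent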
nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:29:59.992 18:50:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:29:59.992 18:50:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:29:59.992 18:50:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:59.992 18:50:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:59.992 18:50:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.992 ************************************ 00:29:59.992 START TEST nvmf_mdns_discovery 00:29:59.992 ************************************ 00:29:59.992 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:30:00.251 * Looking for test storage... 00:30:00.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:00.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.251 --rc genhtml_branch_coverage=1 00:30:00.251 --rc genhtml_function_coverage=1 00:30:00.251 --rc genhtml_legend=1 00:30:00.251 --rc geninfo_all_blocks=1 00:30:00.251 --rc geninfo_unexecuted_blocks=1 00:30:00.251 00:30:00.251 ' 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:00.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.251 --rc genhtml_branch_coverage=1 00:30:00.251 --rc genhtml_function_coverage=1 00:30:00.251 --rc genhtml_legend=1 00:30:00.251 --rc geninfo_all_blocks=1 00:30:00.251 --rc geninfo_unexecuted_blocks=1 00:30:00.251 00:30:00.251 ' 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:00.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.251 --rc genhtml_branch_coverage=1 00:30:00.251 --rc genhtml_function_coverage=1 00:30:00.251 --rc genhtml_legend=1 00:30:00.251 --rc geninfo_all_blocks=1 00:30:00.251 --rc geninfo_unexecuted_blocks=1 00:30:00.251 00:30:00.251 ' 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:00.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.251 --rc genhtml_branch_coverage=1 00:30:00.251 --rc genhtml_function_coverage=1 00:30:00.251 --rc genhtml_legend=1 00:30:00.251 --rc geninfo_all_blocks=1 00:30:00.251 --rc geninfo_unexecuted_blocks=1 00:30:00.251 00:30:00.251 ' 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
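Note: the `lt 1.15 2` check traced above is the generic version comparator from scripts/common.sh. Reconstructed as a compact sketch with the same IFS tokenization and per-component numeric compare; it covers only the strict '<' / '>' cases exercised here, and non-numeric components (routed through decimal() in the real helper) are out of scope:

# Sketch of the traced cmp_versions logic: split on '.', '-' or ':' and
# compare numeric components left to right; missing components count as 0.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
}

cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2.x"   # same result as the trace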
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.251 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
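Note: the NVME_HOSTNQN / NVME_HOSTID pair defined above comes from nvme-cli. A two-line sketch of the derivation; whether common.sh strips the prefix exactly this way is an assumption, but the traced values are consistent with it:

# Generate a host NQN and the matching bare-UUID host ID, as in the trace.
NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:81b74ad8-...
NVME_HOSTID=${NVME_HOSTNQN#nqn.2014-08.org.nvmexpress:uuid:}   # bare UUID portion
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")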
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:00.252 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@456 -- # nvmf_veth_init 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:00.252 Cannot find device "nvmf_init_br" 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:00.252 Cannot find device "nvmf_init_br2" 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:00.252 Cannot find device "nvmf_tgt_br" 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:00.252 Cannot find device "nvmf_tgt_br2" 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:00.252 Cannot find device "nvmf_init_br" 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:00.252 Cannot find device "nvmf_init_br2" 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:00.252 Cannot find device "nvmf_tgt_br" 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:00.252 Cannot find device "nvmf_tgt_br2" 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:00.252 Cannot find device "nvmf_br" 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 00:30:00.252 18:50:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:00.511 Cannot find device "nvmf_init_if" 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:00.511 Cannot find device "nvmf_init_if2" 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 00:30:00.511 18:50:16 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:00.511 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:00.511 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
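
The nvmf_veth_init sequence traced above builds a self-contained test network: the namespace nvmf_tgt_ns_spdk holds the target ends of two veth pairs, the initiator ends stay in the root namespace, and all four bridge-side peers are enslaved to nvmf_br so initiator and target traffic can cross. A condensed sketch of one pair of that topology, using the names and addresses from the trace (error handling and the second pair omitted):

  # create the target namespace and one initiator/target veth pair each
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # address the endpoints: initiator 10.0.0.1, target 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the root-namespace peers together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

The iptables ACCEPT rules and the four ping probes that follow in the trace are the smoke test: if 10.0.0.3/10.0.0.4 answer from the root namespace and 10.0.0.1/10.0.0.2 answer from inside the namespace, the fabric works before any NVMe-oF traffic is attempted.
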
00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:00.511 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:00.512 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:00.512 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:00.512 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:00.512 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:00.512 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:30:00.512 00:30:00.512 --- 10.0.0.3 ping statistics --- 00:30:00.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.512 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:30:00.512 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:00.512 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:00.512 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:30:00.512 00:30:00.512 --- 10.0.0.4 ping statistics --- 00:30:00.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.512 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:30:00.512 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:00.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:00.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:30:00.512 00:30:00.512 --- 10.0.0.1 ping statistics --- 00:30:00.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.512 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:30:00.512 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:00.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:00.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:30:00.771 00:30:00.771 --- 10.0.0.2 ping statistics --- 00:30:00.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.771 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@457 -- # return 0 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@505 -- # nvmfpid=115468 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@506 -- # waitforlisten 115468 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # '[' -z 115468 ']' 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:00.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:00.771 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:00.771 [2024-12-14 18:50:16.368635] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
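
nvmfappstart launches the target inside the namespace and blocks until its JSON-RPC socket answers; --wait-for-rpc defers subsystem initialization so the test can set the discovery filter before framework_start_init, as the RPCs below do. A rough standalone equivalent, where the until-loop stands in for the waitforlisten helper (rpc.py defaults to /var/tmp/spdk.sock):

  # start the target in the namespace, deferring framework init
  ip netns exec nvmf_tgt_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  nvmfpid=$!
  # poll until the app creates and serves its RPC socket
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  # configure before init, then let the framework come up
  ./scripts/rpc.py nvmf_set_config --discovery-filter=address
  ./scripts/rpc.py framework_start_init
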
00:30:00.771 [2024-12-14 18:50:16.368725] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.771 [2024-12-14 18:50:16.501671] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.030 [2024-12-14 18:50:16.581329] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:01.030 [2024-12-14 18:50:16.581390] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.030 [2024-12-14 18:50:16.581416] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.030 [2024-12-14 18:50:16.581424] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.030 [2024-12-14 18:50:16.581431] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:01.030 [2024-12-14 18:50:16.581460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.030 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:01.030 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # return 0 00:30:01.030 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:01.030 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:01.030 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.030 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.030 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:30:01.030 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.030 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.030 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.030 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:30:01.030 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.030 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.289 [2024-12-14 18:50:16.806113] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.289 [2024-12-14 18:50:16.814257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.289 null0 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.289 null1 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.289 null2 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.289 null3 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=115510 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 115510 /tmp/host.sock 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # '[' -z 115510 ']' 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # 
local rpc_addr=/tmp/host.sock 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:01.289 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:01.289 18:50:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.289 [2024-12-14 18:50:16.933465] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:01.289 [2024-12-14 18:50:16.933600] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115510 ] 00:30:01.548 [2024-12-14 18:50:17.077057] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.548 [2024-12-14 18:50:17.160354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.807 18:50:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:01.807 18:50:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # return 0 00:30:01.807 18:50:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:30:01.807 18:50:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:30:01.807 18:50:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:30:01.807 18:50:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=115521 00:30:01.807 18:50:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:30:01.807 18:50:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:30:01.807 18:50:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:30:01.807 Process 1067 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:30:01.807 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:30:01.807 Successfully dropped root privileges. 00:30:01.807 avahi-daemon 0.8 starting up. 00:30:01.807 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:30:01.807 Successfully called chroot(). 00:30:01.807 Successfully dropped remaining capabilities. 00:30:01.807 No service file found in /etc/avahi/services. 00:30:02.744 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:30:02.744 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:30:02.744 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:30:02.744 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:30:02.744 Network interface enumeration completed. 00:30:02.744 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 
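
The avahi-daemon is confined to the target namespace and restricted to the two target-side interfaces, so only the mDNS traffic under test is visible to it. Its configuration is fed through process substitution, which is where the /dev/fd/63 in the trace comes from; the invocation is roughly:

  ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f \
      <(echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &
  avahipid=$!

Restricting allow-interfaces keeps the daemon from announcing on the initiator side; the address records it registers for 10.0.0.3 and 10.0.0.4 are what the _nvme-disc._tcp service records will later resolve to.
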
00:30:02.744 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 00:30:02.744 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:30:02.744 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 00:30:02.744 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 3108545770. 00:30:02.744 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:02.744 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.744 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:02.744 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.744 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:30:02.744 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.744 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:02.744 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.744 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 00:30:02.744 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 00:30:02.744 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:30:02.744 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:02.744 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.744 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:02.744 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:30:02.744 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:30:02.744 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 00:30:03.004 18:50:18 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
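
The repeated [[ '' == '' ]] checks above compare helper output against the expected state; at this point the host app has discovered no controllers and created no bdevs, so both helpers return empty strings. rpc_cmd wraps scripts/rpc.py, so the two helpers reduce to something like:

  # controller names known to the host app
  get_subsystem_names() {
      ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
          | jq -r '.[].name' | sort | xargs
  }
  # bdev names attached on the host app
  get_bdev_list() {
      ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
          | jq -r '.[].name' | sort | xargs
  }

sort makes the comparison order-independent and xargs flattens the names onto one line, so later checks can compare against literals like 'mdns0_nvme0 mdns1_nvme0'.
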
00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:30:03.004 [2024-12-14 18:50:18.749332] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:30:03.004 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.263 [2024-12-14 18:50:18.802639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.263 18:50:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 00:30:04.199 [2024-12-14 18:50:19.649334] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:30:04.458 [2024-12-14 18:50:20.049357] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:30:04.458 [2024-12-14 18:50:20.049412] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:30:04.458 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:04.458 cookie is 0 00:30:04.458 is_local: 1 00:30:04.458 our_own: 0 00:30:04.458 wide_area: 0 00:30:04.458 multicast: 1 00:30:04.458 cached: 1 00:30:04.458 [2024-12-14 18:50:20.149336] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:30:04.458 [2024-12-14 18:50:20.149361] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:30:04.458 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:04.458 cookie is 0 00:30:04.458 is_local: 1 00:30:04.458 our_own: 0 00:30:04.458 wide_area: 0 00:30:04.458 multicast: 1 00:30:04.458 cached: 1 00:30:05.394 [2024-12-14 18:50:21.050540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.394 [2024-12-14 18:50:21.050629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1742190 with addr=10.0.0.4, port=8009 00:30:05.394 [2024-12-14 18:50:21.050672] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:05.394 [2024-12-14 18:50:21.050685] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:05.394 [2024-12-14 18:50:21.050695] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:30:05.652 [2024-12-14 18:50:21.162455] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:30:05.652 [2024-12-14 18:50:21.162497] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:30:05.652 [2024-12-14 18:50:21.162515] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:30:05.652 [2024-12-14 18:50:21.248550] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0 00:30:05.652 [2024-12-14 18:50:21.305359] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:30:05.652 [2024-12-14 18:50:21.305387] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:30:06.587 [2024-12-14 18:50:22.050394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.587 [2024-12-14 18:50:22.050451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1779590 with addr=10.0.0.4, port=8009 00:30:06.587 [2024-12-14 18:50:22.050466] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:06.587 [2024-12-14 18:50:22.050474] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:06.587 [2024-12-14 18:50:22.050481] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:30:07.534 [2024-12-14 18:50:23.050385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.534 [2024-12-14 18:50:23.050441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1741dd0 with addr=10.0.0.4, port=8009 00:30:07.534 [2024-12-14 18:50:23.050455] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:07.534 [2024-12-14 18:50:23.050463] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:07.534 [2024-12-14 18:50:23.050471] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:30:08.470 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:30:08.470 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:08.470 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:08.470 18:50:23 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:08.470 [2024-12-14 18:50:23.892144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 *** 00:30:08.470 [2024-12-14 18:50:23.894592] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:30:08.470 [2024-12-14 18:50:23.894643] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:08.470 [2024-12-14 18:50:23.900112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:30:08.470 [2024-12-14 18:50:23.900604] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.470 18:50:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1 00:30:08.470 [2024-12-14 18:50:24.032695] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:30:08.470 [2024-12-14 18:50:24.032744] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:30:08.470 [2024-12-14 18:50:24.062315] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:30:08.470 [2024-12-14 
18:50:24.062338] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:30:08.470 [2024-12-14 18:50:24.062369] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:30:08.470 [2024-12-14 18:50:24.119455] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:30:08.470 [2024-12-14 18:50:24.148447] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0 00:30:08.470 [2024-12-14 18:50:24.204313] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:30:08.470 [2024-12-14 18:50:24.204340] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:30:09.406 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:30:09.406 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:30:09.406 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:30:09.406 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:09.406 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:09.406 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:09.406 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ 
+;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]] 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]] 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
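
check_mdns_request_exists, whose expansion dominates the trace above, is a substring scan over parseable avahi-browse output. A reconstruction from the expanded steps; the early-return branching is inferred from the @96-@108 tests and may differ in detail from the real helper:

  check_mdns_request_exists() {
      local process=$1 ip=$2 port=$3 check_type=$4
      local output line lines
      output=$(avahi-browse -t -r _nvme-disc._tcp -p)
      readarray -t lines <<< "$output"
      for line in "${lines[@]}"; do
          if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
              # a matching advertisement exists
              [[ $check_type == found ]] && return 0
              return 1
          fi
      done
      # nothing matched: succeed only if absence was expected
      [[ $check_type == "not found" ]]
  }

The spdk1 service only appears after the second target publishes its discovery referral via nvmf_publish_mdns_prr, which is why the same check flips from 'not found' to 'found' between the @152 and @160 invocations.
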
00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:30:09.406 18:50:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:30:09.406 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.406 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:30:09.406 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:30:09.406 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:09.406 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.406 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.406 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:30:09.406 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:30:09.406 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:30:09.406 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.406 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:30:09.406 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:30:09.406 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:09.406 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:30:09.406 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.406 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.406 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:09.406 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:30:09.406 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]] 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]] 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]] 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.664 18:50:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 00:30:09.664 [2024-12-14 18:50:25.349350] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:30:09.664 [2024-12-14 
18:50:25.349374] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:30:09.664 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:09.664 cookie is 0 00:30:09.664 is_local: 1 00:30:09.664 our_own: 0 00:30:09.664 wide_area: 0 00:30:09.664 multicast: 1 00:30:09.664 cached: 1 00:30:09.664 [2024-12-14 18:50:25.349400] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:30:09.922 [2024-12-14 18:50:25.449349] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:30:09.922 [2024-12-14 18:50:25.449375] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:30:09.922 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:09.922 cookie is 0 00:30:09.922 is_local: 1 00:30:09.922 our_own: 0 00:30:09.922 wide_area: 0 00:30:09.922 multicast: 1 00:30:09.922 cached: 1 00:30:09.922 [2024-12-14 18:50:25.449400] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.857 [2024-12-14 18:50:26.473172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:30:10.857 [2024-12-14 18:50:26.474259] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:30:10.857 [2024-12-14 18:50:26.474304] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:30:10.857 [2024-12-14 18:50:26.474344] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:30:10.857 [2024-12-14 18:50:26.474356] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.857 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.857 [2024-12-14 18:50:26.485165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 00:30:10.857 [2024-12-14 18:50:26.486276] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:30:10.858 [2024-12-14 18:50:26.486344] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:30:10.858 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.858 18:50:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 00:30:11.116 [2024-12-14 18:50:26.617489] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0 00:30:11.117 [2024-12-14 18:50:26.617784] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0 00:30:11.117 [2024-12-14 18:50:26.676126] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:30:11.117 [2024-12-14 18:50:26.676149] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:30:11.117 [2024-12-14 18:50:26.676171] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:30:11.117 [2024-12-14 
18:50:26.676187] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:30:11.117 [2024-12-14 18:50:26.676848] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:30:11.117 [2024-12-14 18:50:26.676867] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:30:11.117 [2024-12-14 18:50:26.676888] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:30:11.117 [2024-12-14 18:50:26.676902] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:30:11.117 [2024-12-14 18:50:26.722572] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:30:11.117 [2024-12-14 18:50:26.722595] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:30:11.117 [2024-12-14 18:50:26.722657] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:30:11.117 [2024-12-14 18:50:26.722665] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.053 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.314 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:30:12.314 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:30:12.314 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]] 00:30:12.314 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:12.314 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.314 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.314 [2024-12-14 18:50:27.810089] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:30:12.314 [2024-12-14 18:50:27.810116] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:30:12.314 [2024-12-14 18:50:27.810144] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:30:12.314 [2024-12-14 18:50:27.810155] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:30:12.314 [2024-12-14 18:50:27.813511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:12.314 [2024-12-14 18:50:27.813538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.314 [2024-12-14 18:50:27.813549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:12.314 [2024-12-14 18:50:27.813557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.314 [2024-12-14 18:50:27.813565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:12.314 [2024-12-14 18:50:27.813573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.314 [2024-12-14 18:50:27.813581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:12.314 [2024-12-14 18:50:27.813589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.314 [2024-12-14 18:50:27.813623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756170 is same with the state(6) to be set 00:30:12.314 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.314 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:30:12.314 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.314 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.314 [2024-12-14 18:50:27.822108] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:30:12.314 [2024-12-14 18:50:27.822178] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:30:12.314 [2024-12-14 18:50:27.823480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756170 (9): Bad file descriptor 00:30:12.314 [2024-12-14 18:50:27.824326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:12.314 [2024-12-14 18:50:27.824349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.314 [2024-12-14 18:50:27.824360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:12.314 [2024-12-14 18:50:27.824368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.314 [2024-12-14 18:50:27.824377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:12.314 [2024-12-14 18:50:27.824385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.314 [2024-12-14 18:50:27.824393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:12.314 [2024-12-14 18:50:27.824400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.314 [2024-12-14 18:50:27.824408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1763df0 is same with the state(6) to be set 00:30:12.314 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.314 18:50:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1 00:30:12.314 [2024-12-14 18:50:27.833498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:12.314 [2024-12-14 18:50:27.833594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.314 [2024-12-14 18:50:27.833624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756170 with addr=10.0.0.3, port=4420 00:30:12.314 [2024-12-14 18:50:27.833652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756170 is same with the state(6) to be set 00:30:12.314 [2024-12-14 18:50:27.833668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756170 (9): Bad file descriptor 00:30:12.314 [2024-12-14 18:50:27.833681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:12.314 [2024-12-14 18:50:27.833690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:12.314 [2024-12-14 18:50:27.833699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:30:12.314 [2024-12-14 18:50:27.833714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.314 [2024-12-14 18:50:27.834295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1763df0 (9): Bad file descriptor 00:30:12.314 [2024-12-14 18:50:27.843544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:12.314 [2024-12-14 18:50:27.843646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.314 [2024-12-14 18:50:27.843666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756170 with addr=10.0.0.3, port=4420 00:30:12.314 [2024-12-14 18:50:27.843676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756170 is same with the state(6) to be set 00:30:12.314 [2024-12-14 18:50:27.843690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756170 (9): Bad file descriptor 00:30:12.314 [2024-12-14 18:50:27.843703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:12.314 [2024-12-14 18:50:27.843711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:12.314 [2024-12-14 18:50:27.843720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:12.314 [2024-12-14 18:50:27.843733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.314 [2024-12-14 18:50:27.844304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:12.314 [2024-12-14 18:50:27.844371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.314 [2024-12-14 18:50:27.844388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1763df0 with addr=10.0.0.4, port=4420 00:30:12.314 [2024-12-14 18:50:27.844397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1763df0 is same with the state(6) to be set 00:30:12.314 [2024-12-14 18:50:27.844410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1763df0 (9): Bad file descriptor 00:30:12.314 [2024-12-14 18:50:27.844423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:12.314 [2024-12-14 18:50:27.844430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:12.314 [2024-12-14 18:50:27.844438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:12.314 [2024-12-14 18:50:27.844451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
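The reset/reconnect block above, repeated below roughly every 10 ms in this run, is the host-side fallout of removing the port-4420 listeners (mdns_discovery.sh@195/@196): pending admin commands on the old queue pairs complete as ABORTED - SQ DELETION, and bdev_nvme keeps retrying the stale 10.0.0.3:4420 and 10.0.0.4:4420 paths, each attempt failing with connect() errno 111 (ECONNREFUSED), until a fresh discovery log page drops those paths. A sketch of the target-side step that triggers this, with the RPCs taken verbatim from the log and scripts/rpc.py on the default target socket assumed:

# Drop the 4420 listeners on both subsystems; hosts attached via discovery
# will see ECONNREFUSED on that port until discovery removes the path.
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420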
00:30:12.314 [2024-12-14 18:50:27.853585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:12.314 [2024-12-14 18:50:27.853665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.314 [2024-12-14 18:50:27.853683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756170 with addr=10.0.0.3, port=4420 00:30:12.314 [2024-12-14 18:50:27.853693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756170 is same with the state(6) to be set 00:30:12.314 [2024-12-14 18:50:27.853706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756170 (9): Bad file descriptor 00:30:12.315 [2024-12-14 18:50:27.853719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:12.315 [2024-12-14 18:50:27.853727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:12.315 [2024-12-14 18:50:27.853734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:12.315 [2024-12-14 18:50:27.853747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.315 [2024-12-14 18:50:27.854345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:12.315 [2024-12-14 18:50:27.854400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.315 [2024-12-14 18:50:27.854417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1763df0 with addr=10.0.0.4, port=4420 00:30:12.315 [2024-12-14 18:50:27.854426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1763df0 is same with the state(6) to be set 00:30:12.315 [2024-12-14 18:50:27.854439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1763df0 (9): Bad file descriptor 00:30:12.315 [2024-12-14 18:50:27.854451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:12.315 [2024-12-14 18:50:27.854459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:12.315 [2024-12-14 18:50:27.854466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:12.315 [2024-12-14 18:50:27.854479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.315 [2024-12-14 18:50:27.863636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:12.315 [2024-12-14 18:50:27.863702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.315 [2024-12-14 18:50:27.863719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756170 with addr=10.0.0.3, port=4420 00:30:12.315 [2024-12-14 18:50:27.863728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756170 is same with the state(6) to be set 00:30:12.315 [2024-12-14 18:50:27.863742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756170 (9): Bad file descriptor 00:30:12.315 [2024-12-14 18:50:27.863754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:12.315 [2024-12-14 18:50:27.863762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:12.315 [2024-12-14 18:50:27.863770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:12.315 [2024-12-14 18:50:27.863783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.315 [2024-12-14 18:50:27.864371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:12.315 [2024-12-14 18:50:27.864430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.315 [2024-12-14 18:50:27.864446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1763df0 with addr=10.0.0.4, port=4420 00:30:12.315 [2024-12-14 18:50:27.864455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1763df0 is same with the state(6) to be set 00:30:12.315 [2024-12-14 18:50:27.864468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1763df0 (9): Bad file descriptor 00:30:12.315 [2024-12-14 18:50:27.864480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:12.315 [2024-12-14 18:50:27.864487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:12.315 [2024-12-14 18:50:27.864495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:12.315 [2024-12-14 18:50:27.864507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.315 [2024-12-14 18:50:27.873679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:12.315 [2024-12-14 18:50:27.873750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.315 [2024-12-14 18:50:27.873768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756170 with addr=10.0.0.3, port=4420 00:30:12.315 [2024-12-14 18:50:27.873777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756170 is same with the state(6) to be set 00:30:12.315 [2024-12-14 18:50:27.873791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756170 (9): Bad file descriptor 00:30:12.315 [2024-12-14 18:50:27.873805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:12.315 [2024-12-14 18:50:27.873812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:12.315 [2024-12-14 18:50:27.873820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:12.315 [2024-12-14 18:50:27.873833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.315 [2024-12-14 18:50:27.874405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:12.315 [2024-12-14 18:50:27.874465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.315 [2024-12-14 18:50:27.874482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1763df0 with addr=10.0.0.4, port=4420 00:30:12.315 [2024-12-14 18:50:27.874492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1763df0 is same with the state(6) to be set 00:30:12.315 [2024-12-14 18:50:27.874504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1763df0 (9): Bad file descriptor 00:30:12.315 [2024-12-14 18:50:27.874516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:12.315 [2024-12-14 18:50:27.874524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:12.315 [2024-12-14 18:50:27.874532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:12.315 [2024-12-14 18:50:27.874544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.315 [2024-12-14 18:50:27.883723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:12.315 [2024-12-14 18:50:27.883788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.315 [2024-12-14 18:50:27.883805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756170 with addr=10.0.0.3, port=4420 00:30:12.315 [2024-12-14 18:50:27.883814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756170 is same with the state(6) to be set 00:30:12.315 [2024-12-14 18:50:27.883828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756170 (9): Bad file descriptor 00:30:12.315 [2024-12-14 18:50:27.883840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:12.315 [2024-12-14 18:50:27.883847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:12.315 [2024-12-14 18:50:27.883855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:12.315 [2024-12-14 18:50:27.883868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.315 [2024-12-14 18:50:27.884438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:12.315 [2024-12-14 18:50:27.884495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.315 [2024-12-14 18:50:27.884511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1763df0 with addr=10.0.0.4, port=4420 00:30:12.315 [2024-12-14 18:50:27.884521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1763df0 is same with the state(6) to be set 00:30:12.315 [2024-12-14 18:50:27.884533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1763df0 (9): Bad file descriptor 00:30:12.315 [2024-12-14 18:50:27.884545] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:12.315 [2024-12-14 18:50:27.884553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:12.315 [2024-12-14 18:50:27.884561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:12.315 [2024-12-14 18:50:27.884573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.315 [2024-12-14 18:50:27.893764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:12.315 [2024-12-14 18:50:27.893828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.315 [2024-12-14 18:50:27.893845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756170 with addr=10.0.0.3, port=4420 00:30:12.315 [2024-12-14 18:50:27.893854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756170 is same with the state(6) to be set 00:30:12.315 [2024-12-14 18:50:27.893867] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756170 (9): Bad file descriptor 00:30:12.315 [2024-12-14 18:50:27.893879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:12.315 [2024-12-14 18:50:27.893887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:12.315 [2024-12-14 18:50:27.893895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:12.315 [2024-12-14 18:50:27.893907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.315 [2024-12-14 18:50:27.894471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:12.315 [2024-12-14 18:50:27.894523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.315 [2024-12-14 18:50:27.894539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1763df0 with addr=10.0.0.4, port=4420 00:30:12.315 [2024-12-14 18:50:27.894548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1763df0 is same with the state(6) to be set 00:30:12.315 [2024-12-14 18:50:27.894561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1763df0 (9): Bad file descriptor 00:30:12.315 [2024-12-14 18:50:27.894572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:12.315 [2024-12-14 18:50:27.894580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:12.315 [2024-12-14 18:50:27.894588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:12.315 [2024-12-14 18:50:27.894613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.315 [2024-12-14 18:50:27.903805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:12.315 [2024-12-14 18:50:27.903868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.315 [2024-12-14 18:50:27.903884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756170 with addr=10.0.0.3, port=4420 00:30:12.315 [2024-12-14 18:50:27.903894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756170 is same with the state(6) to be set 00:30:12.315 [2024-12-14 18:50:27.903907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756170 (9): Bad file descriptor 00:30:12.315 [2024-12-14 18:50:27.903919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:12.315 [2024-12-14 18:50:27.903926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:12.315 [2024-12-14 18:50:27.903934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:12.316 [2024-12-14 18:50:27.903947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.316 [2024-12-14 18:50:27.904499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:12.316 [2024-12-14 18:50:27.904557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.316 [2024-12-14 18:50:27.904573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1763df0 with addr=10.0.0.4, port=4420 00:30:12.316 [2024-12-14 18:50:27.904582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1763df0 is same with the state(6) to be set 00:30:12.316 [2024-12-14 18:50:27.904595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1763df0 (9): Bad file descriptor 00:30:12.316 [2024-12-14 18:50:27.904628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:12.316 [2024-12-14 18:50:27.904655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:12.316 [2024-12-14 18:50:27.904663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:12.316 [2024-12-14 18:50:27.904676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.316 [2024-12-14 18:50:27.913845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:12.316 [2024-12-14 18:50:27.913934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.316 [2024-12-14 18:50:27.913953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756170 with addr=10.0.0.3, port=4420 00:30:12.316 [2024-12-14 18:50:27.913962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756170 is same with the state(6) to be set 00:30:12.316 [2024-12-14 18:50:27.913977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756170 (9): Bad file descriptor 00:30:12.316 [2024-12-14 18:50:27.913990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:12.316 [2024-12-14 18:50:27.914000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:12.316 [2024-12-14 18:50:27.914008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:12.316 [2024-12-14 18:50:27.914021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.316 [2024-12-14 18:50:27.914532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:12.316 [2024-12-14 18:50:27.914590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.316 [2024-12-14 18:50:27.914620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1763df0 with addr=10.0.0.4, port=4420 00:30:12.316 [2024-12-14 18:50:27.914630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1763df0 is same with the state(6) to be set 00:30:12.316 [2024-12-14 18:50:27.914644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1763df0 (9): Bad file descriptor 00:30:12.316 [2024-12-14 18:50:27.914656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:12.316 [2024-12-14 18:50:27.914664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:12.316 [2024-12-14 18:50:27.914672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:12.316 [2024-12-14 18:50:27.914685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.316 [2024-12-14 18:50:27.923905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:12.316 [2024-12-14 18:50:27.923968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.316 [2024-12-14 18:50:27.923985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756170 with addr=10.0.0.3, port=4420 00:30:12.316 [2024-12-14 18:50:27.923994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756170 is same with the state(6) to be set 00:30:12.316 [2024-12-14 18:50:27.924007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756170 (9): Bad file descriptor 00:30:12.316 [2024-12-14 18:50:27.924019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:12.316 [2024-12-14 18:50:27.924026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:12.316 [2024-12-14 18:50:27.924035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:12.316 [2024-12-14 18:50:27.924047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.316 [2024-12-14 18:50:27.924566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:12.316 [2024-12-14 18:50:27.924648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.316 [2024-12-14 18:50:27.924666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1763df0 with addr=10.0.0.4, port=4420 00:30:12.316 [2024-12-14 18:50:27.924676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1763df0 is same with the state(6) to be set 00:30:12.316 [2024-12-14 18:50:27.924690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1763df0 (9): Bad file descriptor 00:30:12.316 [2024-12-14 18:50:27.924711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:12.316 [2024-12-14 18:50:27.924721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:12.316 [2024-12-14 18:50:27.924729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:12.316 [2024-12-14 18:50:27.924742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.316 [2024-12-14 18:50:27.933946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:12.316 [2024-12-14 18:50:27.934008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.316 [2024-12-14 18:50:27.934025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756170 with addr=10.0.0.3, port=4420 00:30:12.316 [2024-12-14 18:50:27.934034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756170 is same with the state(6) to be set 00:30:12.316 [2024-12-14 18:50:27.934047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756170 (9): Bad file descriptor 00:30:12.316 [2024-12-14 18:50:27.934059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:12.316 [2024-12-14 18:50:27.934067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:12.316 [2024-12-14 18:50:27.934077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:12.316 [2024-12-14 18:50:27.934089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.316 [2024-12-14 18:50:27.934611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:12.316 [2024-12-14 18:50:27.934670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.316 [2024-12-14 18:50:27.934686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1763df0 with addr=10.0.0.4, port=4420 00:30:12.316 [2024-12-14 18:50:27.934695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1763df0 is same with the state(6) to be set 00:30:12.316 [2024-12-14 18:50:27.934708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1763df0 (9): Bad file descriptor 00:30:12.316 [2024-12-14 18:50:27.934720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:12.316 [2024-12-14 18:50:27.934728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:12.316 [2024-12-14 18:50:27.934736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:12.316 [2024-12-14 18:50:27.934748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.316 [2024-12-14 18:50:27.943986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:12.316 [2024-12-14 18:50:27.944049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.316 [2024-12-14 18:50:27.944066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756170 with addr=10.0.0.3, port=4420 00:30:12.316 [2024-12-14 18:50:27.944075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756170 is same with the state(6) to be set 00:30:12.316 [2024-12-14 18:50:27.944088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756170 (9): Bad file descriptor 00:30:12.316 [2024-12-14 18:50:27.944101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:12.316 [2024-12-14 18:50:27.944108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:12.316 [2024-12-14 18:50:27.944118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:12.316 [2024-12-14 18:50:27.944131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.316 [2024-12-14 18:50:27.944646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:12.316 [2024-12-14 18:50:27.944705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.316 [2024-12-14 18:50:27.944721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1763df0 with addr=10.0.0.4, port=4420 00:30:12.316 [2024-12-14 18:50:27.944731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1763df0 is same with the state(6) to be set 00:30:12.316 [2024-12-14 18:50:27.944744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1763df0 (9): Bad file descriptor 00:30:12.316 [2024-12-14 18:50:27.944765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:12.316 [2024-12-14 18:50:27.944774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:12.316 [2024-12-14 18:50:27.944782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:12.316 [2024-12-14 18:50:27.944794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
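The retries end once the next discovery log page reports the 4420 paths as "not found" (records below): bdev_nvme prunes the stale paths, the checks at mdns_discovery.sh@201/@202 see only trsvcid 4421 per controller, and @206 stops mDNS discovery, which also stops the avahi poller. A sketch of the equivalent manual check and teardown, assuming the same host socket and controller names as this run:

# Expect a single remaining path per controller, on port 4421
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'
# Stop the mDNS discovery service started for this test
scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns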
00:30:12.316 [2024-12-14 18:50:27.952198] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found 00:30:12.316 [2024-12-14 18:50:27.952224] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:30:12.316 [2024-12-14 18:50:27.952241] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:30:12.316 [2024-12-14 18:50:27.953191] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:30:12.316 [2024-12-14 18:50:27.953213] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:30:12.316 [2024-12-14 18:50:27.953228] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:30:12.316 [2024-12-14 18:50:28.038272] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:30:12.316 [2024-12-14 18:50:28.039264] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.253 18:50:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.511 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]] 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]] 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]] 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.512 18:50:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1 00:30:13.512 [2024-12-14 18:50:29.149348] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:30:14.446 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs 00:30:14.446 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:30:14.446 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:30:14.446 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.446 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.446 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:30:14.446 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:30:14.446 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.446 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]] 00:30:14.446 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names 00:30:14.446 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:14.446 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:30:14.446 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:30:14.446 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.446 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:30:14.446 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]] 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
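The notification bookkeeping above (notify_get_notifications -i 4 piped through jq, then notification_count=0 and notify_id=4) suggests a helper that counts events past the last-seen id and advances the high-water mark; a sketch under that reading, with rpc_cmd standing in for the harness wrapper around scripts/rpc.py:

get_notification_count() {
    # count notifications newer than $notify_id on the host-side RPC socket
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    # advance the marker so the next call only sees new events
    notify_id=$((notify_id + notification_count))
}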
host/mdns_discovery.sh@211 -- # get_bdev_list 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]] 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]] 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.705 [2024-12-14 18:50:30.369343] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:30:14.705 2024/12/14 18:50:30 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:30:14.705 request: 00:30:14.705 { 00:30:14.705 "method": "bdev_nvme_start_mdns_discovery", 00:30:14.705 "params": { 00:30:14.705 "name": "mdns", 00:30:14.705 "svcname": "_nvme-disc._http", 00:30:14.705 "hostnqn": "nqn.2021-12.io.spdk:test" 00:30:14.705 } 00:30:14.705 } 00:30:14.705 Got JSON-RPC error response 00:30:14.705 GoRPCClient: error on JSON-RPC call 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:14.705 18:50:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5 00:30:15.272 [2024-12-14 18:50:30.958086] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:30:15.531 [2024-12-14 18:50:31.058085] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:30:15.531 [2024-12-14 18:50:31.158092] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:30:15.531 [2024-12-14 18:50:31.158112] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:30:15.531 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:15.531 cookie is 0 00:30:15.531 is_local: 1 00:30:15.531 our_own: 0 00:30:15.531 wide_area: 0 00:30:15.531 multicast: 1 00:30:15.531 cached: 1 00:30:15.531 [2024-12-14 18:50:31.258091] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:30:15.531 [2024-12-14 18:50:31.258111] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:30:15.531 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:15.531 cookie is 0 00:30:15.531 is_local: 1 00:30:15.531 our_own: 0 00:30:15.531 wide_area: 0 00:30:15.531 multicast: 1 00:30:15.531 cached: 1 00:30:15.531 [2024-12-14 18:50:31.258121] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:30:15.790 [2024-12-14 18:50:31.358092] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:30:15.790 [2024-12-14 18:50:31.358111] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:30:15.790 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:15.790 cookie is 0 00:30:15.790 is_local: 1 00:30:15.790 our_own: 0 00:30:15.790 wide_area: 0 00:30:15.790 multicast: 1 00:30:15.790 cached: 1 00:30:15.790 [2024-12-14 18:50:31.458092] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:30:15.790 [2024-12-14 18:50:31.458112] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:30:15.790 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:15.790 cookie is 0 00:30:15.790 is_local: 1 00:30:15.790 our_own: 0 00:30:15.790 wide_area: 0 00:30:15.790 multicast: 1 00:30:15.790 cached: 1 00:30:15.790 [2024-12-14 18:50:31.458121] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:30:16.726 [2024-12-14 18:50:32.163688] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:30:16.726 [2024-12-14 18:50:32.163709] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:30:16.726 [2024-12-14 18:50:32.163725] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:30:16.726 [2024-12-14 18:50:32.249775] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0 00:30:16.726 [2024-12-14 18:50:32.310104] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:30:16.726 [2024-12-14 18:50:32.310130] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:30:16.726 [2024-12-14 18:50:32.363397] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:30:16.726 [2024-12-14 18:50:32.363417] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:30:16.726 [2024-12-14 18:50:32.363432] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:30:16.726 [2024-12-14 18:50:32.449487] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0 00:30:16.984 [2024-12-14 18:50:32.509209] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:30:16.984 [2024-12-14 18:50:32.509235] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:30:20.275 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs 00:30:20.275 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:30:20.275 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:30:20.275 18:50:35 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.275 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.275 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:30:20.275 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:30:20.275 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.275 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]] 00:30:20.275 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs 00:30:20.275 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:20.275 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:30:20.275 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.275 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:20.276 
18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.276 [2024-12-14 18:50:35.563719] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:30:20.276 2024/12/14 18:50:35 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:30:20.276 request: 00:30:20.276 { 00:30:20.276 "method": "bdev_nvme_start_mdns_discovery", 00:30:20.276 "params": { 00:30:20.276 "name": "cdc", 00:30:20.276 "svcname": "_nvme-disc._tcp", 00:30:20.276 "hostnqn": "nqn.2021-12.io.spdk:test" 00:30:20.276 } 00:30:20.276 } 00:30:20.276 Got JSON-RPC error response 00:30:20.276 GoRPCClient: error on JSON-RPC call 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:30:20.276 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:30:20.276 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:30:20.276 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:30:20.276 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:20.276 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:20.276 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:20.276 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:20.276 18:50:35 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:30:20.276 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:30:20.277 18:50:35 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.277 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.277 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.277 18:50:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 00:30:20.277 [2024-12-14 18:50:35.758089] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found' 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:30:21.239 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:30:21.239 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:21.239 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:30:21.239 18:50:36 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 115510 00:30:21.239 18:50:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 115510 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 115521 00:30:21.499 Got SIGTERM, quitting. 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 00:30:21.499 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:21.499 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 00:30:21.499 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:30:21.499 avahi-daemon 0.8 exiting. 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:21.499 rmmod nvme_tcp 00:30:21.499 rmmod nvme_fabrics 00:30:21.499 rmmod nvme_keyring 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@513 -- # '[' -n 115468 ']' 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@514 -- # killprocess 115468 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@950 -- # '[' -z 115468 ']' 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # kill -0 115468 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@955 -- # uname 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115468 00:30:21.499 killing process with pid 115468 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 115468' 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@969 -- # kill 115468 00:30:21.499 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@974 -- # wait 115468 00:30:21.758 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:21.758 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:21.758 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:21.758 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr 00:30:21.758 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@787 -- # iptables-save 00:30:21.758 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:21.758 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:30:21.758 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:21.758 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:21.758 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:21.758 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:22.017 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:22.017 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:22.017 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:22.017 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:22.017 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:22.017 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:22.017 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:22.017 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:22.017 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:22.017 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:22.017 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:22.017 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:22.017 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.017 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:22.017 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:22.017 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0 00:30:22.017 00:30:22.017 real 0m22.068s 00:30:22.017 user 0m42.921s 00:30:22.017 sys 0m2.189s 
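The teardown trace above first strips SPDK's firewall additions (iptr), then unwinds the veth/bridge topology and the target namespace. Condensed, and assuming the interface names from this run, it amounts to the following sketch:

# drop SPDK_NVMF iptables rules, keep everything else
iptables-save | grep -v SPDK_NVMF | iptables-restore

# detach the bridge legs, take them down, then delete links and the bridge
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
# remove_spdk_ns presumably deletes the namespace itself, e.g.:
ip netns delete nvmf_tgt_ns_spdk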
00:30:22.017 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:22.017 18:50:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.017 ************************************ 00:30:22.017 END TEST nvmf_mdns_discovery 00:30:22.017 ************************************ 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.277 ************************************ 00:30:22.277 START TEST nvmf_host_multipath 00:30:22.277 ************************************ 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:30:22.277 * Looking for test storage... 00:30:22.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:22.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.277 --rc genhtml_branch_coverage=1 00:30:22.277 --rc genhtml_function_coverage=1 00:30:22.277 --rc genhtml_legend=1 00:30:22.277 --rc geninfo_all_blocks=1 00:30:22.277 --rc geninfo_unexecuted_blocks=1 00:30:22.277 00:30:22.277 ' 00:30:22.277 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:22.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.278 --rc genhtml_branch_coverage=1 00:30:22.278 --rc genhtml_function_coverage=1 00:30:22.278 --rc genhtml_legend=1 00:30:22.278 --rc geninfo_all_blocks=1 00:30:22.278 --rc geninfo_unexecuted_blocks=1 00:30:22.278 00:30:22.278 ' 00:30:22.278 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:22.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.278 --rc genhtml_branch_coverage=1 00:30:22.278 --rc genhtml_function_coverage=1 00:30:22.278 --rc genhtml_legend=1 00:30:22.278 --rc geninfo_all_blocks=1 00:30:22.278 --rc geninfo_unexecuted_blocks=1 00:30:22.278 00:30:22.278 ' 00:30:22.278 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:22.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.278 --rc genhtml_branch_coverage=1 00:30:22.278 --rc genhtml_function_coverage=1 00:30:22.278 --rc genhtml_legend=1 00:30:22.278 --rc geninfo_all_blocks=1 00:30:22.278 --rc geninfo_unexecuted_blocks=1 00:30:22.278 00:30:22.278 ' 00:30:22.278 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:22.278 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:30:22.278 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:22.278 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:22.278 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:22.278 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:22.278 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:22.278 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:22.278 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:22.278 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:22.278 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:22.278 18:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:22.278 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:22.278 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:22.537 Cannot find device "nvmf_init_br" 00:30:22.537 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:30:22.537 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:22.537 Cannot find device "nvmf_init_br2" 00:30:22.537 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:30:22.537 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:22.537 Cannot find device "nvmf_tgt_br" 00:30:22.537 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:30:22.537 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:22.537 Cannot find device "nvmf_tgt_br2" 00:30:22.537 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:30:22.537 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:22.538 Cannot find device "nvmf_init_br" 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:22.538 Cannot find device "nvmf_init_br2" 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:22.538 Cannot find device "nvmf_tgt_br" 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:22.538 Cannot find device "nvmf_tgt_br2" 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:22.538 Cannot find device "nvmf_br" 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:22.538 Cannot find device "nvmf_init_if" 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:22.538 Cannot find device "nvmf_init_if2" 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:30:22.538 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:22.538 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:22.538 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
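[Editorial sketch, not part of the captured trace.] The `ip` commands above are the whole test topology: veth pairs whose host-side peers are enslaved to a bridge, with the target-side ends moved into the `nvmf_tgt_ns_spdk` namespace. A minimal standalone reduction to one initiator/target pair, using the same names and addresses as the log (requires root and iproute2):

  ip netns add nvmf_tgt_ns_spdk                              # target-side namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br   # target veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move target end into the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address (host side)
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                            # bridge ties the host-side peers together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br up
  ping -c 1 10.0.0.3                                         # host -> target-namespace reachability

The "Cannot find device" / "Cannot open network namespace" messages interleaved above are expected: the script tears down any leftover topology first, and on a clean VM those deletions fail harmlessly (each is followed by `true`).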
00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:22.797 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:22.797 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:30:22.797 00:30:22.797 --- 10.0.0.3 ping statistics --- 00:30:22.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.797 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:22.797 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:22.797 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:30:22.797 00:30:22.797 --- 10.0.0.4 ping statistics --- 00:30:22.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.797 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:22.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:22.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:30:22.797 00:30:22.797 --- 10.0.0.1 ping statistics --- 00:30:22.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.797 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:22.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:22.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:30:22.797 00:30:22.797 --- 10.0.0.2 ping statistics --- 00:30:22.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.797 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@457 -- # return 0 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@505 -- # nvmfpid=116167 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@506 -- # waitforlisten 116167 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 116167 ']' 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:22.797 18:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:22.797 [2024-12-14 18:50:38.446987] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
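[Editorial sketch, not part of the captured trace.] Once all four ping directions succeed, `nvmfappstart` launches `nvmf_tgt` inside the namespace and blocks until its RPC socket answers. A hedged sketch of that launch-and-wait pattern, assuming an SPDK build tree in `$SPDK`; polling `spdk_get_version` here is an illustrative stand-in for the script's own `waitforlisten` helper:

  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # Poll the default RPC socket until the app responds; spdk_get_version is a
  # cheap, always-registered RPC.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
      sleep 0.5
  done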
00:30:22.797 [2024-12-14 18:50:38.447073] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:23.056 [2024-12-14 18:50:38.581724] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:23.056 [2024-12-14 18:50:38.664271] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:23.056 [2024-12-14 18:50:38.664682] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:23.056 [2024-12-14 18:50:38.664715] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:23.056 [2024-12-14 18:50:38.664726] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:23.056 [2024-12-14 18:50:38.664733] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:23.056 [2024-12-14 18:50:38.664830] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.056 [2024-12-14 18:50:38.665093] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.992 18:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:23.992 18:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:30:23.992 18:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:23.992 18:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:23.992 18:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:23.992 18:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:23.992 18:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=116167 00:30:23.992 18:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:23.992 [2024-12-14 18:50:39.723109] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.251 18:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:24.509 Malloc0 00:30:24.509 18:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:24.768 18:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:24.768 18:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:25.026 [2024-12-14 18:50:40.683050] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:25.026 18:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 
-s 4421 00:30:25.285 [2024-12-14 18:50:40.895042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:30:25.285 18:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:25.285 18:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=116271 00:30:25.285 18:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:25.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:25.285 18:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 116271 /var/tmp/bdevperf.sock 00:30:25.285 18:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 116271 ']' 00:30:25.285 18:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:25.285 18:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:25.285 18:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:25.285 18:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:25.285 18:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:25.853 18:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:25.853 18:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:30:25.853 18:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:26.112 18:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:30:26.371 Nvme0n1 00:30:26.371 18:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:26.629 Nvme0n1 00:30:26.629 18:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:30:26.629 18:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:28.006 18:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:30:28.006 18:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:30:28.006 18:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
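[Editorial sketch, not part of the captured trace.] The multipath mechanics the test exercises distill to a handful of RPCs, all of which appear verbatim in the trace above: attach the same subsystem to bdevperf twice, the second time with `-x multipath`, so it sees a single `Nvme0n1` bdev with two paths (4420 and 4421); then flip per-listener ANA states on the target side to steer I/O, and read back which listener is `optimized`. Assembled from the log's own commands (`$SPDK` is assumed to point at the repo):

  rpc="$SPDK/scripts/rpc.py"
  # Two attaches with the same -b/-n give one bdev with two paths.
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -x multipath -l -1 -o 10
  # Steer I/O: 4420 non_optimized, 4421 optimized (target-side RPC socket).
  "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4421 -n optimized
  # Verification: the port reported as optimized here must match the port where
  # the bpftrace probe counted I/O ("@path[10.0.0.3, <port>]: <count>" in trace.txt).
  "$rpc" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid'

Each `confirm_io_on_port` pass below follows exactly this shape: start the `nvmf_path.bt` bpftrace probe against the target pid, sleep 6 seconds while bdevperf drives I/O, then compare the jq result with the `awk`/`cut`/`sed` parse of `trace.txt`. The all-`inaccessible` case expects an empty port on both sides.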
00:30:28.265 18:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:30:28.265 18:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=116341 00:30:28.265 18:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:30:28.265 18:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 116167 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:34.829 18:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:34.829 18:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:30:34.829 18:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:30:34.829 18:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:34.829 Attaching 4 probes... 00:30:34.829 @path[10.0.0.3, 4421]: 20771 00:30:34.829 @path[10.0.0.3, 4421]: 21439 00:30:34.829 @path[10.0.0.3, 4421]: 21194 00:30:34.829 @path[10.0.0.3, 4421]: 21238 00:30:34.829 @path[10.0.0.3, 4421]: 21235 00:30:34.829 18:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:34.829 18:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:30:34.829 18:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:30:34.829 18:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:30:34.829 18:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:30:34.829 18:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:30:34.829 18:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 116341 00:30:34.829 18:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:34.829 18:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:30:34.829 18:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:30:34.829 18:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:30:35.106 18:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:30:35.106 18:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 116167 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:35.106 18:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=116472 00:30:35.106 18:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:30:41.685 18:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:41.685 18:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:30:41.685 18:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:30:41.685 18:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:41.685 Attaching 4 probes... 00:30:41.685 @path[10.0.0.3, 4420]: 20541 00:30:41.685 @path[10.0.0.3, 4420]: 21062 00:30:41.685 @path[10.0.0.3, 4420]: 20907 00:30:41.685 @path[10.0.0.3, 4420]: 20943 00:30:41.685 @path[10.0.0.3, 4420]: 21125 00:30:41.685 18:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:41.685 18:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:30:41.685 18:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:30:41.685 18:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:30:41.685 18:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:30:41.685 18:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:30:41.685 18:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 116472 00:30:41.685 18:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:41.685 18:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:30:41.685 18:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:30:41.685 18:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:30:41.944 18:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:30:41.944 18:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 116167 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:41.944 18:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=116603 00:30:41.944 18:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:30:48.509 18:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:48.509 18:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:30:48.509 18:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:30:48.509 18:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:48.509 Attaching 4 probes... 
00:30:48.509 @path[10.0.0.3, 4421]: 16004 00:30:48.509 @path[10.0.0.3, 4421]: 20562 00:30:48.509 @path[10.0.0.3, 4421]: 19886 00:30:48.509 @path[10.0.0.3, 4421]: 20198 00:30:48.509 @path[10.0.0.3, 4421]: 19489 00:30:48.509 18:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:30:48.509 18:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:48.509 18:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:30:48.509 18:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:30:48.509 18:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:30:48.509 18:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:30:48.509 18:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 116603 00:30:48.509 18:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:48.509 18:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:30:48.509 18:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:30:48.509 18:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:30:48.768 18:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:30:48.768 18:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 116167 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:48.768 18:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=116728 00:30:48.768 18:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:30:55.342 18:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:55.342 18:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:30:55.342 18:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:30:55.342 18:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:55.342 Attaching 4 probes... 
00:30:55.343 00:30:55.343 00:30:55.343 00:30:55.343 00:30:55.343 00:30:55.343 18:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:30:55.343 18:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:55.343 18:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:30:55.343 18:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:30:55.343 18:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:30:55.343 18:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:30:55.343 18:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 116728 00:30:55.343 18:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:55.343 18:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:30:55.343 18:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:30:55.343 18:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:30:55.343 18:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:30:55.343 18:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=116863 00:30:55.343 18:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 116167 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:55.343 18:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:31:01.940 18:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:31:01.940 18:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:31:01.940 18:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:31:01.940 18:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:01.940 Attaching 4 probes... 
00:31:01.940 @path[10.0.0.3, 4421]: 20351 00:31:01.940 @path[10.0.0.3, 4421]: 20728 00:31:01.940 @path[10.0.0.3, 4421]: 20516 00:31:01.940 @path[10.0.0.3, 4421]: 20435 00:31:01.940 @path[10.0.0.3, 4421]: 20468 00:31:01.940 18:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:31:01.940 18:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:31:01.940 18:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:31:01.940 18:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:31:01.940 18:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:31:01.940 18:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:31:01.940 18:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 116863 00:31:01.940 18:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:01.940 18:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:31:01.940 [2024-12-14 18:51:17.602893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ee690 is same with the state(6) to be set 00:31:01.940 [2024-12-14 18:51:17.602964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ee690 is same with the state(6) to be set 00:31:01.940 [2024-12-14 18:51:17.602975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ee690 is same with the state(6) to be set 00:31:01.940 [2024-12-14 18:51:17.602983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ee690 is same with the state(6) to be set 00:31:01.940 [2024-12-14 18:51:17.602991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ee690 is same with the state(6) to be set 00:31:01.940 [2024-12-14 18:51:17.603002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ee690 is same with the state(6) to be set 00:31:01.940 [2024-12-14 18:51:17.603010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ee690 is same with the state(6) to be set 00:31:01.940 [2024-12-14 18:51:17.603017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ee690 is same with the state(6) to be set 00:31:01.940 18:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:31:03.318 18:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:31:03.318 18:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=116989 00:31:03.318 18:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:31:03.318 18:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 116167 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:31:09.882 18:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:31:09.882 18:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:31:09.882 18:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:31:09.882 18:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:09.882 Attaching 4 probes... 00:31:09.882 @path[10.0.0.3, 4420]: 19710 00:31:09.882 @path[10.0.0.3, 4420]: 20105 00:31:09.882 @path[10.0.0.3, 4420]: 20370 00:31:09.882 @path[10.0.0.3, 4420]: 20292 00:31:09.882 @path[10.0.0.3, 4420]: 20453 00:31:09.882 18:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:31:09.882 18:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:31:09.882 18:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:31:09.882 18:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:31:09.882 18:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:31:09.882 18:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:31:09.882 18:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 116989 00:31:09.882 18:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:09.882 18:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:31:09.882 [2024-12-14 18:51:25.308571] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:31:09.882 18:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:31:09.882 18:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:31:16.456 18:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:31:16.456 18:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=117181 00:31:16.456 18:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 116167 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:31:16.456 18:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:31:23.045 18:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:31:23.045 18:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:31:23.045 18:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:31:23.045 18:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:23.045 Attaching 4 probes... 
00:31:23.045 @path[10.0.0.3, 4421]: 18991 00:31:23.045 @path[10.0.0.3, 4421]: 18905 00:31:23.045 @path[10.0.0.3, 4421]: 18878 00:31:23.045 @path[10.0.0.3, 4421]: 18856 00:31:23.045 @path[10.0.0.3, 4421]: 19495 00:31:23.045 18:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:31:23.045 18:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:31:23.045 18:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:31:23.045 18:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:31:23.045 18:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:31:23.045 18:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:31:23.045 18:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 117181 00:31:23.045 18:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:23.045 18:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 116271 00:31:23.045 18:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 116271 ']' 00:31:23.045 18:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 116271 00:31:23.045 18:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:31:23.045 18:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:23.045 18:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 116271 00:31:23.045 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:31:23.045 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:31:23.045 killing process with pid 116271 00:31:23.045 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 116271' 00:31:23.045 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 116271 00:31:23.045 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 116271 00:31:23.045 { 00:31:23.045 "results": [ 00:31:23.045 { 00:31:23.045 "job": "Nvme0n1", 00:31:23.045 "core_mask": "0x4", 00:31:23.045 "workload": "verify", 00:31:23.045 "status": "terminated", 00:31:23.045 "verify_range": { 00:31:23.045 "start": 0, 00:31:23.045 "length": 16384 00:31:23.045 }, 00:31:23.045 "queue_depth": 128, 00:31:23.045 "io_size": 4096, 00:31:23.045 "runtime": 55.541587, 00:31:23.045 "iops": 8724.147547314411, 00:31:23.045 "mibps": 34.07870135669692, 00:31:23.045 "io_failed": 0, 00:31:23.045 "io_timeout": 0, 00:31:23.045 "avg_latency_us": 14643.8766478946, 00:31:23.045 "min_latency_us": 1169.2218181818182, 00:31:23.045 "max_latency_us": 7015926.69090909 00:31:23.045 } 00:31:23.045 ], 00:31:23.045 "core_count": 1 00:31:23.045 } 00:31:23.045 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 116271 00:31:23.045 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:31:23.045 [2024-12-14 18:50:40.958904] Starting SPDK v24.09.1-pre git sha1 
b18e1bd62 / DPDK 23.11.0 initialization... 00:31:23.045 [2024-12-14 18:50:40.958987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116271 ] 00:31:23.045 [2024-12-14 18:50:41.089166] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.045 [2024-12-14 18:50:41.161916] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:23.045 [2024-12-14 18:50:42.272022] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:31:23.045 Running I/O for 90 seconds... 00:31:23.045 10991.00 IOPS, 42.93 MiB/s [2024-12-14T18:51:38.798Z] 10979.00 IOPS, 42.89 MiB/s [2024-12-14T18:51:38.798Z] 10863.00 IOPS, 42.43 MiB/s [2024-12-14T18:51:38.798Z] 10821.75 IOPS, 42.27 MiB/s [2024-12-14T18:51:38.798Z] 10795.20 IOPS, 42.17 MiB/s [2024-12-14T18:51:38.798Z] 10760.83 IOPS, 42.03 MiB/s [2024-12-14T18:51:38.798Z] 10738.71 IOPS, 41.95 MiB/s [2024-12-14T18:51:38.798Z] 10714.00 IOPS, 41.85 MiB/s [2024-12-14T18:51:38.798Z] [2024-12-14 18:50:50.727883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.045 [2024-12-14 18:50:50.727960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:23.045 [2024-12-14 18:50:50.728004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.045 [2024-12-14 18:50:50.728023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:23.045 [2024-12-14 18:50:50.728043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.045 [2024-12-14 18:50:50.728057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:23.045 [2024-12-14 18:50:50.728076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:50944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.045 [2024-12-14 18:50:50.728089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:23.045 [2024-12-14 18:50:50.728108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.045 [2024-12-14 18:50:50.728122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:23.045 [2024-12-14 18:50:50.728142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.045 [2024-12-14 18:50:50.728155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:23.045 [2024-12-14 18:50:50.728173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:50968 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.045 [2024-12-14 18:50:50.728187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:23.045 [2024-12-14 18:50:50.728206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:50976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.045 [2024-12-14 18:50:50.728220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:23.045 [2024-12-14 18:50:50.728239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:50984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.045 [2024-12-14 18:50:50.728252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:23.045 [2024-12-14 18:50:50.728303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.045 [2024-12-14 18:50:50.728318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:23.045 [2024-12-14 18:50:50.728337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:51000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.045 [2024-12-14 18:50:50.728361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:23.045 [2024-12-14 18:50:50.728379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.045 [2024-12-14 18:50:50.728392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.045 [2024-12-14 18:50:50.728410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.045 [2024-12-14 18:50:50.728423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:23.045 [2024-12-14 18:50:50.728442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.045 [2024-12-14 18:50:50.728455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:23.045 [2024-12-14 18:50:50.728475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:51032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.045 [2024-12-14 18:50:50.728488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:23.045 [2024-12-14 18:50:50.728506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.045 [2024-12-14 18:50:50.728519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:23.045 [2024-12-14 18:50:50.728538] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.045 [2024-12-14 18:50:50.728552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.728571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:51056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.728586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.728628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.728667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.728686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.728700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.728720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.728734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.728753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.728776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.728797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:51096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.728811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.728830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.728844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.728863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:51112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.728877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.728895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:51120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.728908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 
18:50:50.728930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.728944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.728962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:51136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.728978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.728996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:51144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.729010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.729039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:51152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.729053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.729072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:51160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.729085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.729103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.729117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.729136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:51176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.729149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.729168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.729189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.729644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.729671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.729697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.729712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 
cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.729735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.729756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.729776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.729790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.729809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:51224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.729822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.729841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.729855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.729874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:51240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.729888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.729907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:51248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.729920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.729939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.729953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.729980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.730004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.730022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.730036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.730054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.730078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.730108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.730122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.730141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:51296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.730155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.730173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.730187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.730206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.730221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.730240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.730255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.730274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.730287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.730306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.730320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.730339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.046 [2024-12-14 18:50:50.730353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.730372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:50328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.046 [2024-12-14 18:50:50.730385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.730404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:50336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.046 [2024-12-14 18:50:50.730419] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:23.046 [2024-12-14 18:50:50.730438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:50344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.730451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.730470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.730483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.730509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.730524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.730543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:50368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.730556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.730576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:50376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.730590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.730635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:50384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.730650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.730669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:50392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.730684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.730703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.730718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.730737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:50408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.730759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.730778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50416 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.730792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.730811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.730825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.730843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.730858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.730877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:50440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.730890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.730909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:50448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.730922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.730941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:50456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.730966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.730987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:50464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:50488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:48 nsid:1 lba:50496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:50504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:50512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:50520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:50528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:50536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:50544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:50560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:50568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731470] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:50576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:50584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:50600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:50616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:50624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:50632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:50640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:50648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 
sqhd:0054 p:0 m:0 dnr:0 00:31:23.047 [2024-12-14 18:50:50.731845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:50656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.047 [2024-12-14 18:50:50.731859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.731878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:50664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.731892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.731912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:50672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.731926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.731946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:50680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.731972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.731991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:50688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:50696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:50704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:50720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:50728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732191] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:50744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:50752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:50768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:50776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:50784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:50800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:50808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 
18:50:50.732538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:50816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:50848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:50864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:50888 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:50896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.732957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:50904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.732971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:50.733786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:50912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:50.733812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:23.048 10675.44 IOPS, 41.70 MiB/s [2024-12-14T18:51:38.801Z] 10657.60 IOPS, 41.63 MiB/s [2024-12-14T18:51:38.801Z] 10644.00 IOPS, 41.58 MiB/s [2024-12-14T18:51:38.801Z] 10627.00 IOPS, 41.51 MiB/s [2024-12-14T18:51:38.801Z] 10621.15 IOPS, 41.49 MiB/s [2024-12-14T18:51:38.801Z] 10615.71 IOPS, 41.47 MiB/s [2024-12-14T18:51:38.801Z] [2024-12-14 18:50:57.269986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.048 [2024-12-14 18:50:57.270069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:57.270126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.048 [2024-12-14 18:50:57.270144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:57.270165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.048 [2024-12-14 18:50:57.270180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:57.270238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.048 [2024-12-14 18:50:57.270253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:57.270272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.048 [2024-12-14 18:50:57.270286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:57.270304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.048 [2024-12-14 18:50:57.270318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:23.048 [2024-12-14 18:50:57.270337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.270351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.270369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.270383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.270403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.270416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.270435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.270449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.270467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.270481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.270500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.270513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.270531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.270544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.270562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.270575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.270593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.270621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.270650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.270665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.270685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.270700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.270720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.270734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.270753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.270767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.270786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.270800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.270818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.270831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.270850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.270863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.270881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.270895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.270914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.270928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.272161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.272187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:31:23.049 [2024-12-14 18:50:57.272213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.272227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.272250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.272264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.272286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.272311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.272334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.272348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.272370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.272384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.272412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.272427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.272448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.272462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.273132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.273154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.273179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.273193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.273216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.273230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.273252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.273267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.273290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.273303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.273326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.049 [2024-12-14 18:50:57.273339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.273362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.049 [2024-12-14 18:50:57.273376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:23.049 [2024-12-14 18:50:57.273399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.049 [2024-12-14 18:50:57.273420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.273445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.273458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.273482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.273495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.273518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.273531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.273554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.273567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.273590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.273618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.273644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.273658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.273684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.273698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.273721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.273735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.273757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.273771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.273793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.273807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.273830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.273843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.273867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.273880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.273914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.273929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.273953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.273966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.273989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:23.050 [2024-12-14 18:50:57.274003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.274026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.274040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.274062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.274076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.274099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.274113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.274136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.274150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.274172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.274186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.274209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.274222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.274246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.274260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.274284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.274298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.274322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.274336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:50:57.274518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 
nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:50:57.274542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:23.050 10461.87 IOPS, 40.87 MiB/s [2024-12-14T18:51:38.803Z] 9946.62 IOPS, 38.85 MiB/s [2024-12-14T18:51:38.803Z] 9972.88 IOPS, 38.96 MiB/s [2024-12-14T18:51:38.803Z] 9981.28 IOPS, 38.99 MiB/s [2024-12-14T18:51:38.803Z] 9984.00 IOPS, 39.00 MiB/s [2024-12-14T18:51:38.803Z] 9976.05 IOPS, 38.97 MiB/s [2024-12-14T18:51:38.803Z] 9976.43 IOPS, 38.97 MiB/s [2024-12-14T18:51:38.803Z] [2024-12-14 18:51:04.264028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.050 [2024-12-14 18:51:04.264104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:51:04.264154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.050 [2024-12-14 18:51:04.264173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:51:04.264193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.050 [2024-12-14 18:51:04.264206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:51:04.264225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.050 [2024-12-14 18:51:04.264239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:51:04.264257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.050 [2024-12-14 18:51:04.264271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:51:04.264289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.050 [2024-12-14 18:51:04.264302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:51:04.264320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.050 [2024-12-14 18:51:04.264333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:51:04.264351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.050 [2024-12-14 18:51:04.264365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:23.050 [2024-12-14 18:51:04.264383] 
00:31:23.050 [2024-12-14 18:51:04.264028 .. 18:51:04.268649] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 nsid:1 lba:35544 len:8, WRITE sqid:1 nsid:1 lba:35832..36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, then READ sqid:1 nsid:1 lba:35552..35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0034..007d p:0 m:0 dnr:0 (repetitive per-command NOTICE records condensed)
00:31:23.052 9883.41 IOPS, 38.61 MiB/s [2024-12-14T18:51:38.805Z] 9453.70 IOPS, 36.93 MiB/s [2024-12-14T18:51:38.805Z] 9059.79 IOPS, 35.39 MiB/s [2024-12-14T18:51:38.805Z] 8697.40 IOPS, 33.97 MiB/s [2024-12-14T18:51:38.805Z] 8362.88 IOPS, 32.67 MiB/s [2024-12-14T18:51:38.805Z] 8053.15 IOPS, 31.46 MiB/s [2024-12-14T18:51:38.805Z] 7765.54 IOPS, 30.33 MiB/s [2024-12-14T18:51:38.805Z] 7576.52 IOPS, 29.60 MiB/s [2024-12-14T18:51:38.805Z] 7668.23 IOPS, 29.95 MiB/s [2024-12-14T18:51:38.805Z] 7754.13 IOPS, 30.29 MiB/s [2024-12-14T18:51:38.805Z] 7833.47 IOPS, 30.60 MiB/s [2024-12-14T18:51:38.805Z] 7904.00 IOPS, 30.88 MiB/s [2024-12-14T18:51:38.805Z] 7972.82 IOPS, 31.14 MiB/s [2024-12-14T18:51:38.805Z] 8041.23 IOPS, 31.41 MiB/s [2024-12-14T18:51:38.805Z]
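In these records the parenthesized pair printed after the status string is the NVMe (SCT/SC) status: (03/02) is SCT 3h Path Related Status, SC 02h Asymmetric Access Inaccessible, returned while the controller's ANA state makes the namespace unreachable, and (00/08), which takes over below once the submission queue is deleted, is SCT 0h Generic Command Status, SC 08h Command Aborted due to SQ Deletion. A small decoder sketch for just these two pairs (the helper name is illustrative, not an SPDK API):

#include <stdio.h>

/* Decode the two (SCT/SC) pairs seen in this log; the meanings come from
 * the NVMe base specification status tables. Illustrative helper only. */
static const char *nvme_status_str(unsigned sct, unsigned sc)
{
    if (sct == 0x3 && sc == 0x02)
        return "Path Related: Asymmetric Access Inaccessible (ANA)";
    if (sct == 0x0 && sc == 0x08)
        return "Generic: Command Aborted - SQ Deletion";
    return "other (see the NVMe base spec status tables)";
}

int main(void)
{
    printf("(03/02) -> %s\n", nvme_status_str(0x3, 0x02));
    printf("(00/08) -> %s\n", nvme_status_str(0x0, 0x08));
    return 0;
}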
00:31:23.052 [2024-12-14 18:51:17.604052 .. 18:51:17.604354] nvme_qpair.c: 243/474: *NOTICE*: WRITE sqid:1 nsid:1 lba:32656..32712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:006d..0074 p:0 m:0 dnr:0 (repetitive per-command NOTICE records condensed)
00:31:23.052 [2024-12-14 18:51:17.604546 .. 18:51:17.607496] nvme_qpair.c: 243/474: *NOTICE*: WRITE sqid:1 nsid:1 lba:32720..33424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 lba:32440..32592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (repetitive per-command NOTICE records condensed)
00:31:23.055 [2024-12-14 18:51:17.607526 .. 18:51:17.607653] nvme_qpair.c: 558/579: *NOTICE*/*ERROR*: aborting queued i/o, commands completed manually: WRITE sqid:1 cid:0 nsid:1 lba:33432, 33440, 33448 len:8 PRP1 0x0 PRP2 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (repetitive records condensed)
00:31:23.055 [2024-12-14 18:51:17.607664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:23.055 [2024-12-14 18:51:17.607673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*:
Command completed manually: 00:31:23.055 [2024-12-14 18:51:17.607690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32600 len:8 PRP1 0x0 PRP2 0x0 00:31:23.055 [2024-12-14 18:51:17.607702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:23.055 [2024-12-14 18:51:17.607714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:23.055 [2024-12-14 18:51:17.607723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:23.055 [2024-12-14 18:51:17.607732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32608 len:8 PRP1 0x0 PRP2 0x0 00:31:23.055 [2024-12-14 18:51:17.607743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:23.055 [2024-12-14 18:51:17.607762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:23.055 [2024-12-14 18:51:17.607771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:23.055 [2024-12-14 18:51:17.607780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32616 len:8 PRP1 0x0 PRP2 0x0 00:31:23.055 [2024-12-14 18:51:17.607794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:23.055 [2024-12-14 18:51:17.607806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:23.055 [2024-12-14 18:51:17.607815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:23.055 [2024-12-14 18:51:17.607824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32624 len:8 PRP1 0x0 PRP2 0x0 00:31:23.055 [2024-12-14 18:51:17.607835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:23.055 [2024-12-14 18:51:17.607846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:23.055 [2024-12-14 18:51:17.607855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:23.055 [2024-12-14 18:51:17.607864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32632 len:8 PRP1 0x0 PRP2 0x0 00:31:23.055 [2024-12-14 18:51:17.607875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:23.055 [2024-12-14 18:51:17.607887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:23.055 [2024-12-14 18:51:17.607895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:23.055 [2024-12-14 18:51:17.607904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32640 len:8 PRP1 0x0 PRP2 0x0 00:31:23.055 [2024-12-14 18:51:17.607915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:23.055 [2024-12-14 18:51:17.607927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:23.055 [2024-12-14 18:51:17.607936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:23.056 [2024-12-14 
18:51:17.607944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32648 len:8 PRP1 0x0 PRP2 0x0 00:31:23.056 [2024-12-14 18:51:17.607956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:23.056 [2024-12-14 18:51:17.607967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:23.056 [2024-12-14 18:51:17.607976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:23.056 [2024-12-14 18:51:17.607985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33456 len:8 PRP1 0x0 PRP2 0x0 00:31:23.056 [2024-12-14 18:51:17.607996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:23.056 [2024-12-14 18:51:17.608060] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x218c180 was disconnected and freed. reset controller. 00:31:23.056 [2024-12-14 18:51:17.608175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:23.056 [2024-12-14 18:51:17.608197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:23.056 [2024-12-14 18:51:17.608212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:23.056 [2024-12-14 18:51:17.608223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:23.056 [2024-12-14 18:51:17.608236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:23.056 [2024-12-14 18:51:17.608248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:23.056 [2024-12-14 18:51:17.608261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:23.056 [2024-12-14 18:51:17.608279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:23.056 [2024-12-14 18:51:17.608292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.056 [2024-12-14 18:51:17.608305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:23.056 [2024-12-14 18:51:17.608326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2182450 is same with the state(6) to be set 00:31:23.056 [2024-12-14 18:51:17.609648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:23.056 [2024-12-14 18:51:17.609688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2182450 (9): Bad file descriptor 00:31:23.056 [2024-12-14 18:51:17.609822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.056 [2024-12-14 18:51:17.609850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2182450 with 
addr=10.0.0.3, port=4421 00:31:23.056 [2024-12-14 18:51:17.609866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2182450 is same with the state(6) to be set 00:31:23.056 [2024-12-14 18:51:17.609888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2182450 (9): Bad file descriptor 00:31:23.056 [2024-12-14 18:51:17.609909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:23.056 [2024-12-14 18:51:17.609922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:23.056 [2024-12-14 18:51:17.609935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:23.056 [2024-12-14 18:51:17.609958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:23.056 [2024-12-14 18:51:17.609972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:23.056 8100.28 IOPS, 31.64 MiB/s [2024-12-14T18:51:38.809Z] 8152.11 IOPS, 31.84 MiB/s [2024-12-14T18:51:38.809Z] 8202.79 IOPS, 32.04 MiB/s [2024-12-14T18:51:38.809Z] 8250.74 IOPS, 32.23 MiB/s [2024-12-14T18:51:38.809Z] 8300.48 IOPS, 32.42 MiB/s [2024-12-14T18:51:38.809Z] 8344.46 IOPS, 32.60 MiB/s [2024-12-14T18:51:38.809Z] 8391.07 IOPS, 32.78 MiB/s [2024-12-14T18:51:38.809Z] 8432.00 IOPS, 32.94 MiB/s [2024-12-14T18:51:38.809Z] 8473.86 IOPS, 33.10 MiB/s [2024-12-14T18:51:38.809Z] 8514.62 IOPS, 33.26 MiB/s [2024-12-14T18:51:38.809Z] [2024-12-14 18:51:27.675303] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:23.056 8548.39 IOPS, 33.39 MiB/s [2024-12-14T18:51:38.809Z] 8580.51 IOPS, 33.52 MiB/s [2024-12-14T18:51:38.809Z] 8598.00 IOPS, 33.59 MiB/s [2024-12-14T18:51:38.809Z] 8619.35 IOPS, 33.67 MiB/s [2024-12-14T18:51:38.809Z] 8636.86 IOPS, 33.74 MiB/s [2024-12-14T18:51:38.809Z] 8653.84 IOPS, 33.80 MiB/s [2024-12-14T18:51:38.809Z] 8671.73 IOPS, 33.87 MiB/s [2024-12-14T18:51:38.809Z] 8687.30 IOPS, 33.93 MiB/s [2024-12-14T18:51:38.809Z] 8701.50 IOPS, 33.99 MiB/s [2024-12-14T18:51:38.809Z] 8720.00 IOPS, 34.06 MiB/s [2024-12-14T18:51:38.809Z] Received shutdown signal, test time was about 55.542470 seconds 00:31:23.056 00:31:23.056 Latency(us) 00:31:23.056 [2024-12-14T18:51:38.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:23.056 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:23.056 Verification LBA range: start 0x0 length 0x4000 00:31:23.056 Nvme0n1 : 55.54 8724.15 34.08 0.00 0.00 14643.88 1169.22 7015926.69 00:31:23.056 [2024-12-14T18:51:38.809Z] =================================================================================================================== 00:31:23.056 [2024-12-14T18:51:38.809Z] Total : 8724.15 34.08 0.00 0.00 14643.88 1169.22 7015926.69 00:31:23.056 [2024-12-14 18:51:38.030254] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:31:23.056 18:51:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:23.056 rmmod nvme_tcp 00:31:23.056 rmmod nvme_fabrics 00:31:23.056 rmmod nvme_keyring 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@513 -- # '[' -n 116167 ']' 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@514 -- # killprocess 116167 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 116167 ']' 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 116167 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 116167 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:23.056 killing process with pid 116167 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 116167' 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 116167 00:31:23.056 18:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 116167 00:31:23.315 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:23.315 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:23.315 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:23.315 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:31:23.315 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-save 00:31:23.315 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:31:23.315 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:31:23.574 00:31:23.574 real 1m1.480s 00:31:23.574 user 2m53.682s 00:31:23.574 sys 0m13.677s 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:23.574 ************************************ 00:31:23.574 END TEST nvmf_host_multipath 00:31:23.574 ************************************ 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:23.574 18:51:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.833 ************************************ 00:31:23.833 START TEST nvmf_timeout 00:31:23.833 ************************************ 00:31:23.833 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:31:23.833 * Looking for test 
storage... 00:31:23.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:31:23.833 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:23.833 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:23.833 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:31:23.833 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:23.833 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:23.833 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:23.833 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:23.833 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:31:23.833 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:23.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.834 --rc genhtml_branch_coverage=1 00:31:23.834 --rc genhtml_function_coverage=1 00:31:23.834 --rc genhtml_legend=1 00:31:23.834 --rc geninfo_all_blocks=1 00:31:23.834 --rc geninfo_unexecuted_blocks=1 00:31:23.834 00:31:23.834 ' 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:23.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.834 --rc genhtml_branch_coverage=1 00:31:23.834 --rc genhtml_function_coverage=1 00:31:23.834 --rc genhtml_legend=1 00:31:23.834 --rc geninfo_all_blocks=1 00:31:23.834 --rc geninfo_unexecuted_blocks=1 00:31:23.834 00:31:23.834 ' 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:23.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.834 --rc genhtml_branch_coverage=1 00:31:23.834 --rc genhtml_function_coverage=1 00:31:23.834 --rc genhtml_legend=1 00:31:23.834 --rc geninfo_all_blocks=1 00:31:23.834 --rc geninfo_unexecuted_blocks=1 00:31:23.834 00:31:23.834 ' 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:23.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.834 --rc genhtml_branch_coverage=1 00:31:23.834 --rc genhtml_function_coverage=1 00:31:23.834 --rc genhtml_legend=1 00:31:23.834 --rc geninfo_all_blocks=1 00:31:23.834 --rc geninfo_unexecuted_blocks=1 00:31:23.834 00:31:23.834 ' 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:23.834 
18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:23.834 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:23.834 18:51:39 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:31:23.834 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:23.835 Cannot find device "nvmf_init_br" 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:23.835 Cannot find device "nvmf_init_br2" 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:31:23.835 Cannot find device "nvmf_tgt_br" 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:23.835 Cannot find device "nvmf_tgt_br2" 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:31:23.835 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:24.093 Cannot find device "nvmf_init_br" 00:31:24.093 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:31:24.093 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:24.093 Cannot find device "nvmf_init_br2" 00:31:24.093 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:31:24.093 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:24.093 Cannot find device "nvmf_tgt_br" 00:31:24.093 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:31:24.093 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:24.093 Cannot find device "nvmf_tgt_br2" 00:31:24.093 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:31:24.093 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:24.093 Cannot find device "nvmf_br" 00:31:24.093 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:31:24.093 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:24.093 Cannot find device "nvmf_init_if" 00:31:24.093 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:31:24.093 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:24.093 Cannot find device "nvmf_init_if2" 00:31:24.093 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:31:24.093 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:24.093 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:24.093 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:31:24.093 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:24.093 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:24.093 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:24.094 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:24.352 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:24.352 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:24.352 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
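
For orientation, the ip/iptables commands above assemble the following virtual topology, which the pings just below then verify (a sketch reconstructed from the trace, not part of the log; the ipts body is inferred from its expansion above):

# default netns                            netns nvmf_tgt_ns_spdk
#   nvmf_init_if   10.0.0.1/24  <-veth->   (bridge-side peer: nvmf_init_br)
#   nvmf_init_if2  10.0.0.2/24  <-veth->   (bridge-side peer: nvmf_init_br2)
#   nvmf_tgt_br    (bridge-side peer) <-veth->  nvmf_tgt_if   10.0.0.3/24
#   nvmf_tgt_br2   (bridge-side peer) <-veth->  nvmf_tgt_if2  10.0.0.4/24
# All four *_br ends are enslaved to the nvmf_br bridge, so the initiator
# addresses 10.0.0.1/2 can reach the target namespace at 10.0.0.3/4 over TCP.
#
# The ipts wrapper tags every firewall rule with an SPDK_NVMF comment; a body
# consistent with the expansion shown in the trace would be:
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
# which is exactly what let the teardown earlier in this log restore the
# firewall with: iptables-save | grep -v SPDK_NVMF | iptables-restore
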
00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:24.353 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:24.353 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:31:24.353 00:31:24.353 --- 10.0.0.3 ping statistics --- 00:31:24.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.353 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:24.353 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:24.353 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:31:24.353 00:31:24.353 --- 10.0.0.4 ping statistics --- 00:31:24.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.353 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:24.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:24.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:31:24.353 00:31:24.353 --- 10.0.0.1 ping statistics --- 00:31:24.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.353 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:24.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:24.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:31:24.353 00:31:24.353 --- 10.0.0.2 ping statistics --- 00:31:24.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.353 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@457 -- # return 0 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@505 -- # nvmfpid=117558 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@506 -- # waitforlisten 117558 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:24.353 18:51:39 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 117558 ']' 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:24.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:24.353 18:51:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:24.353 [2024-12-14 18:51:39.980580] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:31:24.353 [2024-12-14 18:51:39.980724] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:24.612 [2024-12-14 18:51:40.120060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:24.612 [2024-12-14 18:51:40.211323] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:24.612 [2024-12-14 18:51:40.211370] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:24.612 [2024-12-14 18:51:40.211381] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:24.612 [2024-12-14 18:51:40.211390] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:24.612 [2024-12-14 18:51:40.211398] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
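
The -c 0x3 in the EAL parameters above (the -m 0x3 passed to nvmfappstart) is a hex CPU-core bitmap. A throwaway sketch for decoding such masks, using the values from this run (illustrative helper, not part of the test scripts):

# Decode an SPDK/DPDK hex core mask into the list of cores it selects.
decode_mask() {
  local mask=$1
  local -a cores=()
  local i
  for i in $(seq 0 31); do
    (( (mask >> i) & 1 )) && cores+=("$i")
  done
  echo "$mask -> cores ${cores[*]}"
}
decode_mask 0x3   # 0x3 -> cores 0 1  (the two nvmf_tgt reactors started below)
decode_mask 0x4   # 0x4 -> cores 2    (bdevperf later runs here, off the target's cores)
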
00:31:24.612 [2024-12-14 18:51:40.211535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:24.612 [2024-12-14 18:51:40.211544] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.545 18:51:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:25.545 18:51:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:31:25.545 18:51:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:25.545 18:51:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:25.545 18:51:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:25.545 18:51:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:25.545 18:51:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:25.545 18:51:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:25.804 [2024-12-14 18:51:41.318267] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:25.804 18:51:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:26.062 Malloc0 00:31:26.062 18:51:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:26.321 18:51:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:26.579 18:51:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:26.579 [2024-12-14 18:51:42.328357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:26.838 18:51:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=117655 00:31:26.838 18:51:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:31:26.838 18:51:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 117655 /var/tmp/bdevperf.sock 00:31:26.838 18:51:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 117655 ']' 00:31:26.838 18:51:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:26.838 18:51:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:26.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:26.838 18:51:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
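
Stripped of the trace prefixes, the target bring-up that just scrolled past reduces to five RPCs (the same commands and arguments as in the trace; only the $rpc shorthand is ours):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8192-byte I/O unit
$rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # exposed as nsid:1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
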
00:31:26.838 18:51:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:26.838 18:51:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:26.838 [2024-12-14 18:51:42.391290] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:31:26.838 [2024-12-14 18:51:42.391360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117655 ] 00:31:26.838 [2024-12-14 18:51:42.529377] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.097 [2024-12-14 18:51:42.622616] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:27.097 18:51:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:27.097 18:51:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:31:27.097 18:51:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:27.355 18:51:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:31:27.614 NVMe0n1 00:31:27.614 18:51:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:27.614 18:51:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=117680 00:31:27.614 18:51:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:31:27.614 Running I/O for 10 seconds... 
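For reference, the plumbing assembled by this point, condensed from the xtrace above (rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path; the 10.0.0.3 address and socket paths are specific to this CI VM, and the interpretive comments are a reading of the flags, not part of the log):

    # Target side: TCP transport, a 64 MiB / 512 B-block malloc namespace,
    # exported through subsystem cnode1 with a listener on 10.0.0.3:4420.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # Initiator side, against bdevperf's RPC socket: -r -1 sets the retry
    # count (here effectively unlimited), and the attached controller probes
    # for reconnect every 2 s but is declared lost after 5 s of disconnect.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

The 5 s / 2 s pair is what the rest of the test exercises: the listener is pulled mid-I/O and the log below records the reconnect attempts inside that window.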
00:31:28.549 18:51:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:31:29.069 10104.00 IOPS, 39.47 MiB/s [2024-12-14T18:51:44.822Z] [2024-12-14 18:51:44.570409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81690 is same with the state(6) to be set
[... six further identical tcp.c:1773 recv-state errors for tqpair=0xd81690, then a long run of paired nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion NOTICE records (timestamps 18:51:44.571685-44.584858): every outstanding WRITE (lba 99808-100304) and READ (lba 99288-99800) on qid:1 completes as ABORTED - SQ DELETION (00/08), with nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs reporting 'aborting queued i/o' and nvme_qpair.c: 558:nvme_qpair_manual_complete_request completing the still-queued commands manually ...]
00:31:29.073 [2024-12-14 18:51:44.584943] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17726e0 was disconnected and freed. reset controller.
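What the condensed flood records: removing the listener drops the TCP connection, the target deletes the submission queue, and every command still in flight completes with the generic ABORTED - SQ DELETION status (SCT 0x0 / SC 0x08, the 00/08 in the log), after which bdev_nvme frees the dead qpair and schedules a controller reset. A sketch of driving the same fault by hand against this target, using the same commands the script itself issues:

    # Yank the listener mid-I/O: in-flight commands are aborted with
    # SQ DELETION and the initiator enters its reconnect state machine.
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # Put it back before ctrlr-loss-timeout expires and the next reconnect
    # probe can re-establish the controller:
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420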
00:31:29.073 [2024-12-14 18:51:44.585081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:29.073 [2024-12-14 18:51:44.585106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.073 [2024-12-14 18:51:44.585118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:29.073 [2024-12-14 18:51:44.585128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.073 [2024-12-14 18:51:44.585138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:29.073 [2024-12-14 18:51:44.585146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.073 [2024-12-14 18:51:44.585156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:29.073 [2024-12-14 18:51:44.585165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.073 [2024-12-14 18:51:44.585174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16fa1c0 is same with the state(6) to be set 00:31:29.073 [2024-12-14 18:51:44.585407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:29.073 [2024-12-14 18:51:44.585437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16fa1c0 (9): Bad file descriptor 00:31:29.073 [2024-12-14 18:51:44.585541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.073 [2024-12-14 18:51:44.585568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16fa1c0 with addr=10.0.0.3, port=4420 00:31:29.073 [2024-12-14 18:51:44.585581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16fa1c0 is same with the state(6) to be set 00:31:29.073 [2024-12-14 18:51:44.585614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16fa1c0 (9): Bad file descriptor 00:31:29.073 [2024-12-14 18:51:44.585633] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:29.073 [2024-12-14 18:51:44.585642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:29.073 [2024-12-14 18:51:44.585652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:29.073 [2024-12-14 18:51:44.585674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
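The errno = 111 above is ECONNREFUSED: the first reconnect probe found nothing listening on 10.0.0.3:4420. Per --reconnect-delay-sec 2 the driver retries every two seconds (the 18:51:46 and 18:51:48 attempts below) until --ctrlr-loss-timeout-sec 5 expires. A rough sketch of watching that window from the shell, mirroring the get_controller/jq checks the script runs below:

    # Poll bdevperf's controller list once a second; NVMe0 should stay
    # listed while reconnects are still being attempted and vanish once
    # the 5 s ctrlr-loss timeout deletes the controller.
    for i in 1 2 3 4 5 6 7; do
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
        sleep 1
    done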
00:31:29.073 [2024-12-14 18:51:44.585695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:29.073 18:51:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:31:31.016 6205.50 IOPS, 24.24 MiB/s [2024-12-14T18:51:46.769Z]
00:31:31.016 4137.00 IOPS, 16.16 MiB/s [2024-12-14T18:51:46.769Z]
00:31:31.016 [2024-12-14 18:51:46.585990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.016 [2024-12-14 18:51:46.586097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16fa1c0 with addr=10.0.0.3, port=4420
00:31:31.016 [2024-12-14 18:51:46.586115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16fa1c0 is same with the state(6) to be set
00:31:31.016 [2024-12-14 18:51:46.586142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16fa1c0 (9): Bad file descriptor
00:31:31.016 [2024-12-14 18:51:46.586161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:31.016 [2024-12-14 18:51:46.586172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:31.016 [2024-12-14 18:51:46.586183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:31.016 [2024-12-14 18:51:46.586213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:31.016 [2024-12-14 18:51:46.586225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:31.016 18:51:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:31:31.016 18:51:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:31.016 18:51:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:31:31.289 18:51:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:31:31.289 18:51:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:31:31.289 18:51:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:31:31.289 18:51:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:31:31.562 18:51:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:31:31.562 18:51:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
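get_controller and get_bdev above are thin wrappers around bdevperf's RPC socket: the test asserts that the controller (NVMe0) and its namespace bdev (NVMe0n1) are still registered while reconnect retries are in flight, and only disappear once the loss timeout expires. A standalone sketch of the same check, reusing the socket and script paths from this run:

    sock=/var/tmp/bdevperf.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
    bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')
    # while reconnects are still being retried, both names must survive
    [[ $ctrlr == NVMe0 && $bdev == NVMe0n1 ]] || echo "controller or bdev was torn down early"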
00:31:32.757 3102.75 IOPS, 12.12 MiB/s [2024-12-14T18:51:48.769Z]
00:31:32.757 2482.20 IOPS, 9.70 MiB/s [2024-12-14T18:51:48.769Z]
00:31:33.016 [2024-12-14 18:51:48.586493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:33.016 [2024-12-14 18:51:48.586627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16fa1c0 with addr=10.0.0.3, port=4420
00:31:33.016 [2024-12-14 18:51:48.586646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16fa1c0 is same with the state(6) to be set
00:31:33.016 [2024-12-14 18:51:48.586674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16fa1c0 (9): Bad file descriptor
00:31:33.016 [2024-12-14 18:51:48.586709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:33.016 [2024-12-14 18:51:48.586718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:33.016 [2024-12-14 18:51:48.586728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:33.016 [2024-12-14 18:51:48.586772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:33.016 [2024-12-14 18:51:48.586785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:34.889 2068.50 IOPS, 8.08 MiB/s [2024-12-14T18:51:50.642Z]
00:31:34.889 1773.00 IOPS, 6.93 MiB/s [2024-12-14T18:51:50.642Z]
00:31:34.889 [2024-12-14 18:51:50.586837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:34.889 [2024-12-14 18:51:50.586927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:34.889 [2024-12-14 18:51:50.586954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:34.889 [2024-12-14 18:51:50.586965] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:31:34.889 [2024-12-14 18:51:50.587011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:36.094 1551.38 IOPS, 6.06 MiB/s
00:31:36.094 Latency(us)
00:31:36.094 [2024-12-14T18:51:51.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:36.094 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:36.094 Verification LBA range: start 0x0 length 0x4000
00:31:36.094 NVMe0n1 : 8.24 1506.11 5.88 15.53 0.00 84202.71 2308.65 7046430.72
00:31:36.094 [2024-12-14T18:51:51.847Z] ===================================================================================================================
00:31:36.094 [2024-12-14T18:51:51.847Z] Total : 1506.11 5.88 15.53 0.00 84202.71 2308.65 7046430.72
00:31:36.094 {
00:31:36.094   "results": [
00:31:36.094     {
00:31:36.094       "job": "NVMe0n1",
00:31:36.094       "core_mask": "0x4",
00:31:36.094       "workload": "verify",
00:31:36.094       "status": "finished",
00:31:36.094       "verify_range": {
00:31:36.094         "start": 0,
00:31:36.094         "length": 16384
00:31:36.094       },
00:31:36.094       "queue_depth": 128,
00:31:36.094       "io_size": 4096,
00:31:36.094       "runtime": 8.240446,
00:31:36.094       "iops": 1506.107800475848,
00:31:36.094       "mibps": 5.883233595608782,
00:31:36.094       "io_failed": 128,
00:31:36.094       "io_timeout": 0,
00:31:36.094       "avg_latency_us": 84202.70520398176,
00:31:36.094       "min_latency_us": 2308.6545454545453,
00:31:36.094       "max_latency_us": 7046430.72
00:31:36.094     }
00:31:36.094   ],
00:31:36.094   "core_count": 1
00:31:36.094 }
00:31:36.661 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:31:36.661 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:36.661 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:31:36.920 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:31:36.920 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:31:36.920 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:31:36.920 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:31:37.179 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:31:37.179 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 117680
00:31:37.179 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 117655
00:31:37.179 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 117655 ']'
00:31:37.179 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 117655
00:31:37.179 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname
00:31:37.179 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:37.179 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 117655
00:31:37.179 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:31:37.179 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:31:37.179 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 117655'
00:31:37.179 killing process with pid 117655
00:31:37.179 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 117655
00:31:37.179 Received shutdown signal, test time was about 9.555915 seconds
00:31:37.179
00:31:37.179 Latency(us)
00:31:37.179 [2024-12-14T18:51:52.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:37.179 [2024-12-14T18:51:52.932Z] ===================================================================================================================
00:31:37.179 [2024-12-14T18:51:52.932Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:37.179 18:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 117655
00:31:37.438 18:51:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:31:37.698 [2024-12-14 18:51:53.323150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:31:37.698 18:51:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=117833
00:31:37.698 18:51:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:31:37.698 18:51:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 117833 /var/tmp/bdevperf.sock
00:31:37.698 18:51:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 117833 ']'
00:31:37.698 18:51:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:37.698 18:51:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:37.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:31:37.698 18:51:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
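killprocess above is the harness's guarded kill: it checks the pid is non-empty and alive, refuses to signal a sudo wrapper, then sends SIGTERM and reaps the child. A minimal standalone equivalent of what the trace shows (pid taken from this run):

    pid=117655                                   # pid recorded in this run
    if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
        name=$(ps --no-headers -o comm= "$pid")
        if [ "$name" != sudo ]; then             # never SIGTERM a sudo wrapper
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                          # only reaps if $pid is our child, as in the harness
        fi
    fi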
00:31:37.698 18:51:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:37.698 18:51:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:31:37.957 [2024-12-14 18:51:53.410139] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:31:37.957 [2024-12-14 18:51:53.410241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117833 ]
00:31:37.957 [2024-12-14 18:51:53.546528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:37.957 [2024-12-14 18:51:53.628826] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:31:38.899 18:51:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:38.899 18:51:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0
00:31:38.899 18:51:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:31:39.158 18:51:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:31:39.416 NVMe0n1
00:31:39.416 18:51:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:31:39.416 18:51:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=117886
00:31:39.416 18:51:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:31:39.674 Running I/O for 10 seconds...
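The three flags on bdev_nvme_attach_controller are what this scenario exercises: reconnect-delay-sec is the interval between reconnect attempts after a disconnect, fast-io-fail-timeout-sec is how long queued I/O is held before being failed back, and ctrlr-loss-timeout-sec is how long the bdev layer keeps retrying before the controller is deleted. With 1/2/5 seconds here, a path outage longer than 5 s produces exactly the "Resetting controller failed" cascade seen below. A sketch of the same attach, reusing the socket, address, and NQN from this run (flag semantics per the SPDK bdev_nvme documentation, stated here as my reading, not harness output):

    sock=/var/tmp/bdevperf.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # --reconnect-delay-sec 1      : wait 1 s between reconnect attempts
    # --fast-io-fail-timeout-sec 2 : after 2 s disconnected, fail queued I/O back to the caller
    # --ctrlr-loss-timeout-sec 5   : after 5 s disconnected, give the controller up for lost
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1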
00:31:40.610 18:51:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:31:40.610 10332.00 IOPS, 40.36 MiB/s [2024-12-14T18:51:56.363Z]
00:31:40.610 [2024-12-14 18:51:56.360484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebc960 is same with the state(6) to be set
00:31:40.610 [2024-12-14 18:51:56.360550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebc960 is same with the state(6) to be set
00:31:40.610 [2024-12-14 18:51:56.360564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebc960 is same with the state(6) to be set
00:31:40.610 [2024-12-14 18:51:56.360978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:40.610 [2024-12-14 18:51:56.361019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:40.611 [... long run of identical abort records elided: after the listener was removed, every remaining in-flight WRITE (lba 98592 through 99024) and READ (lba 98016 through 98576) on qid:1 was completed with ABORTED - SQ DELETION (00/08) ...]
00:31:40.874 [2024-12-14 18:51:56.363797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbf6e0 is same with the state(6) to be set
00:31:40.874 [2024-12-14 18:51:56.363808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:40.874 [2024-12-14 18:51:56.363815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:40.874 [2024-12-14 18:51:56.363823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99032 len:8 PRP1 0x0 PRP2 0x0
00:31:40.874 [2024-12-14 18:51:56.363831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:40.874 [2024-12-14 18:51:56.363883] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bbf6e0 was disconnected and freed. reset controller.
*NOTICE*: qpair 0x1bbf6e0 was disconnected and freed. reset controller.
00:31:40.874 [2024-12-14 18:51:56.364120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:40.874 [2024-12-14 18:51:56.364204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b471c0 (9): Bad file descriptor
00:31:40.874 [2024-12-14 18:51:56.364310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.875 [2024-12-14 18:51:56.364330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b471c0 with addr=10.0.0.3, port=4420
00:31:40.875 [2024-12-14 18:51:56.364340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b471c0 is same with the state(6) to be set
00:31:40.875 [2024-12-14 18:51:56.364356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b471c0 (9): Bad file descriptor
00:31:40.875 [2024-12-14 18:51:56.364372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:40.875 [2024-12-14 18:51:56.364381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:40.875 [2024-12-14 18:51:56.364391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:40.875 [2024-12-14 18:51:56.364410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:40.875 [2024-12-14 18:51:56.364420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:40.875 18:51:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:31:41.700 6126.00 IOPS, 23.93 MiB/s [2024-12-14T18:51:57.453Z] [2024-12-14 18:51:57.364531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.700 [2024-12-14 18:51:57.364611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b471c0 with addr=10.0.0.3, port=4420
00:31:41.700 [2024-12-14 18:51:57.364627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b471c0 is same with the state(6) to be set
00:31:41.700 [2024-12-14 18:51:57.364648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b471c0 (9): Bad file descriptor
00:31:41.700 [2024-12-14 18:51:57.364664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:41.700 [2024-12-14 18:51:57.364674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:41.700 [2024-12-14 18:51:57.364683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:41.700 [2024-12-14 18:51:57.364706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:41.700 [2024-12-14 18:51:57.364717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:41.700 18:51:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:31:42.284 [2024-12-14 18:51:57.719624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:31:42.284 18:51:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 117886
00:31:42.805 4084.00 IOPS, 15.95 MiB/s [2024-12-14T18:51:58.558Z] [2024-12-14 18:51:58.379330] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:44.675 3063.00 IOPS, 11.96 MiB/s [2024-12-14T18:52:01.364Z] 4041.40 IOPS, 15.79 MiB/s [2024-12-14T18:52:02.299Z] 5054.33 IOPS, 19.74 MiB/s [2024-12-14T18:52:03.233Z] 5770.29 IOPS, 22.54 MiB/s [2024-12-14T18:52:04.609Z] 6320.62 IOPS, 24.69 MiB/s [2024-12-14T18:52:05.545Z] 6748.22 IOPS, 26.36 MiB/s [2024-12-14T18:52:05.545Z] 7019.90 IOPS, 27.42 MiB/s
00:31:49.792 Latency(us)
00:31:49.792 [2024-12-14T18:52:05.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:49.792 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:49.792 Verification LBA range: start 0x0 length 0x4000
00:31:49.792 NVMe0n1 : 10.01 7026.15 27.45 0.00 0.00 18184.59 1779.90 3019898.88
00:31:49.792 [2024-12-14T18:52:05.545Z] ===================================================================================================================
00:31:49.792 [2024-12-14T18:52:05.545Z] Total : 7026.15 27.45 0.00 0.00 18184.59 1779.90 3019898.88
00:31:49.792 {
00:31:49.792   "results": [
00:31:49.792     {
00:31:49.792       "job": "NVMe0n1",
00:31:49.792       "core_mask": "0x4",
00:31:49.792       "workload": "verify",
00:31:49.792       "status": "finished",
00:31:49.792       "verify_range": {
00:31:49.792         "start": 0,
00:31:49.792         "length": 16384
00:31:49.792       },
00:31:49.792       "queue_depth": 128,
00:31:49.792       "io_size": 4096,
00:31:49.792       "runtime": 10.009321,
00:31:49.792       "iops": 7026.150924723066,
00:31:49.792       "mibps": 27.445902049699477,
00:31:49.792       "io_failed": 0,
00:31:49.792       "io_timeout": 0,
00:31:49.792       "avg_latency_us": 18184.590209941354,
00:31:49.792       "min_latency_us": 1779.898181818182,
00:31:49.792       "max_latency_us": 3019898.88
00:31:49.792     }
00:31:49.792   ],
00:31:49.792   "core_count": 1
00:31:49.792 }
00:31:49.792 18:52:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=117998
00:31:49.792 18:52:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:31:49.792 18:52:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:31:49.792 Running I/O for 10 seconds...
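The JSON object printed after the latency table is the machine-readable form of the same run summary that bdevperf.py's perform_tests call emits on stdout. A minimal post-processing sketch, assuming the blob above has been copied out of the log (elapsed-time prefixes stripped) into a file named results.json; the file name and helper are hypothetical, but every field name is taken verbatim from the output above:

    #!/usr/bin/env python3
    # Hypothetical helper: summarize a bdevperf perform_tests result blob.
    # Assumes the JSON object from the log was saved to results.json.
    import json

    with open("results.json") as f:
        data = json.load(f)

    print(f"cores used: {data['core_count']}")
    for job in data["results"]:
        # Latencies are reported in microseconds; show avg/max in ms.
        print(f"{job['job']}: {job['iops']:.2f} IOPS "
              f"({job['mibps']:.2f} MiB/s), "
              f"avg latency {job['avg_latency_us'] / 1000:.2f} ms "
              f"(min {job['min_latency_us']:.1f} us, "
              f"max {job['max_latency_us'] / 1000:.0f} ms)")

Run against the blob above, this would report 7026.15 IOPS at an average latency of about 18.2 ms, with a 3.02 s worst case, matching the "Total" row of the table.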
00:31:50.728 18:52:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:31:50.990 9859.00 IOPS, 38.51 MiB/s [2024-12-14T18:52:06.743Z] [2024-12-14 18:52:06.491592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81db0 is same with the state(6) to be set
00:31:50.990 [2024-12-14 18:52:06.493768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:50.990 [2024-12-14 18:52:06.493824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:50.990 [2024-12-14 18:52:06.493847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:50.990 [2024-12-14 18:52:06.493859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:50.990 [2024-12-14 18:52:06.493871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:50.990 [2024-12-14 18:52:06.493881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:50.990 [2024-12-14 18:52:06.493892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:50.990 [2024-12-14 18:52:06.493901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:50.990 [2024-12-14 18:52:06.493912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:50.990 [2024-12-14 18:52:06.493922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:50.990 [2024-12-14 18:52:06.493933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.990 [2024-12-14 18:52:06.493942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:50.990 [2024-12-14 18:52:06.493953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90992
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.990 [2024-12-14 18:52:06.493962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.990 [2024-12-14 18:52:06.493972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.990 [2024-12-14 18:52:06.493992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.990 [2024-12-14 18:52:06.494003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.990 [2024-12-14 18:52:06.494027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.990 [2024-12-14 18:52:06.494055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.990 [2024-12-14 18:52:06.494064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.990 [2024-12-14 18:52:06.494075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.990 [2024-12-14 18:52:06.494084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:91032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:91056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:91072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:50.991 [2024-12-14 18:52:06.494207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:91096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:91136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:91144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:91152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494409] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:91160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:91192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:91200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:91208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:91232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.991 [2024-12-14 18:52:06.494917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.991 [2024-12-14 18:52:06.494928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.494937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.494948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.494957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.494968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.494980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.494991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:91400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:91408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:91416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:91424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:91432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:91440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:91456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 
[2024-12-14 18:52:06.495283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:91480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:91488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:91496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:50.992 [2024-12-14 18:52:06.495482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.992 [2024-12-14 18:52:06.495502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.992 [2024-12-14 18:52:06.495521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.992 [2024-12-14 18:52:06.495541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.992 [2024-12-14 18:52:06.495561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.992 [2024-12-14 18:52:06.495580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.992 [2024-12-14 18:52:06.495622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.992 [2024-12-14 18:52:06.495644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.992 [2024-12-14 18:52:06.495664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.992 [2024-12-14 18:52:06.495685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.992 [2024-12-14 18:52:06.495705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:6 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.992 [2024-12-14 18:52:06.495730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.992 [2024-12-14 18:52:06.495750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.992 [2024-12-14 18:52:06.495769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.992 [2024-12-14 18:52:06.495790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.992 [2024-12-14 18:52:06.495810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.992 [2024-12-14 18:52:06.495821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.993 [2024-12-14 18:52:06.495830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.495840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.993 [2024-12-14 18:52:06.495850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.495861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.993 [2024-12-14 18:52:06.495869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.495880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.993 [2024-12-14 18:52:06.495889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.495900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.993 [2024-12-14 18:52:06.495909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.495920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91760 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:50.993 [2024-12-14 18:52:06.495929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.495940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.993 [2024-12-14 18:52:06.495950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.495961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.993 [2024-12-14 18:52:06.495970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.495981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.993 [2024-12-14 18:52:06.495990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.993 [2024-12-14 18:52:06.496010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.993 [2024-12-14 18:52:06.496029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.993 [2024-12-14 18:52:06.496053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.993 [2024-12-14 18:52:06.496074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.993 [2024-12-14 18:52:06.496094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.993 [2024-12-14 18:52:06.496114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.993 [2024-12-14 
18:52:06.496134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:50.993 [2024-12-14 18:52:06.496154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:50.993 [2024-12-14 18:52:06.496193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91856 len:8 PRP1 0x0 PRP2 0x0 00:31:50.993 [2024-12-14 18:52:06.496202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:50.993 [2024-12-14 18:52:06.496223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:50.993 [2024-12-14 18:52:06.496230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91864 len:8 PRP1 0x0 PRP2 0x0 00:31:50.993 [2024-12-14 18:52:06.496239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:50.993 [2024-12-14 18:52:06.496255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:50.993 [2024-12-14 18:52:06.496263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91872 len:8 PRP1 0x0 PRP2 0x0 00:31:50.993 [2024-12-14 18:52:06.496272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:50.993 [2024-12-14 18:52:06.496288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:50.993 [2024-12-14 18:52:06.496296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91880 len:8 PRP1 0x0 PRP2 0x0 00:31:50.993 [2024-12-14 18:52:06.496305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:50.993 [2024-12-14 18:52:06.496320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:50.993 [2024-12-14 18:52:06.496328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91888 len:8 PRP1 0x0 PRP2 0x0 00:31:50.993 [2024-12-14 18:52:06.496337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:50.993 [2024-12-14 18:52:06.496353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:31:50.993 [2024-12-14 18:52:06.496366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91896 len:8 PRP1 0x0 PRP2 0x0 00:31:50.993 [2024-12-14 18:52:06.496375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:50.993 [2024-12-14 18:52:06.496392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:50.993 [2024-12-14 18:52:06.496399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91904 len:8 PRP1 0x0 PRP2 0x0 00:31:50.993 [2024-12-14 18:52:06.496408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:50.993 [2024-12-14 18:52:06.496424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:50.993 [2024-12-14 18:52:06.496432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91912 len:8 PRP1 0x0 PRP2 0x0 00:31:50.993 [2024-12-14 18:52:06.496441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:50.993 [2024-12-14 18:52:06.496456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:50.993 [2024-12-14 18:52:06.496464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91920 len:8 PRP1 0x0 PRP2 0x0 00:31:50.993 [2024-12-14 18:52:06.496473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:50.993 [2024-12-14 18:52:06.496489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:50.993 [2024-12-14 18:52:06.496497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91928 len:8 PRP1 0x0 PRP2 0x0 00:31:50.993 [2024-12-14 18:52:06.496506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:50.993 [2024-12-14 18:52:06.496523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:50.993 [2024-12-14 18:52:06.496531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91936 len:8 PRP1 0x0 PRP2 0x0 00:31:50.993 [2024-12-14 18:52:06.496539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:50.993 [2024-12-14 18:52:06.496556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:50.993 [2024-12-14 18:52:06.496563] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91944 len:8 PRP1 0x0 PRP2 0x0 00:31:50.993 [2024-12-14 18:52:06.496572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:50.993 [2024-12-14 18:52:06.496587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:50.993 [2024-12-14 18:52:06.496595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91952 len:8 PRP1 0x0 PRP2 0x0 00:31:50.993 [2024-12-14 18:52:06.496618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.993 [2024-12-14 18:52:06.496628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:50.994 [2024-12-14 18:52:06.496635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:50.994 [2024-12-14 18:52:06.496648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91960 len:8 PRP1 0x0 PRP2 0x0 00:31:50.994 [2024-12-14 18:52:06.496657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.994 [2024-12-14 18:52:06.496666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:50.994 [2024-12-14 18:52:06.496673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:50.994 [2024-12-14 18:52:06.496681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91968 len:8 PRP1 0x0 PRP2 0x0 00:31:50.994 [2024-12-14 18:52:06.496701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.994 [2024-12-14 18:52:06.496710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:50.994 [2024-12-14 18:52:06.496717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:50.994 [2024-12-14 18:52:06.496726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91976 len:8 PRP1 0x0 PRP2 0x0 00:31:50.994 [2024-12-14 18:52:06.496735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.994 [2024-12-14 18:52:06.496750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:50.994 [2024-12-14 18:52:06.496757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:50.994 [2024-12-14 18:52:06.496765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91984 len:8 PRP1 0x0 PRP2 0x0 00:31:50.994 [2024-12-14 18:52:06.496774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.994 [2024-12-14 18:52:06.496783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:50.994 [2024-12-14 18:52:06.506630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:50.994 [2024-12-14 18:52:06.506679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:91992 len:8 PRP1 0x0 PRP2 0x0 00:31:50.994 [2024-12-14 18:52:06.506692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.994 [2024-12-14 18:52:06.506705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:50.994 [2024-12-14 18:52:06.506713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:50.994 [2024-12-14 18:52:06.506721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92000 len:8 PRP1 0x0 PRP2 0x0 00:31:50.994 [2024-12-14 18:52:06.506730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.994 [2024-12-14 18:52:06.506785] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bbede0 was disconnected and freed. reset controller. 00:31:50.994 [2024-12-14 18:52:06.506916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:50.994 [2024-12-14 18:52:06.506936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.994 [2024-12-14 18:52:06.506948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:50.994 [2024-12-14 18:52:06.506958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.994 [2024-12-14 18:52:06.506968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:50.994 [2024-12-14 18:52:06.506976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.994 [2024-12-14 18:52:06.506986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:50.994 [2024-12-14 18:52:06.506995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.994 [2024-12-14 18:52:06.507005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b471c0 is same with the state(6) to be set 00:31:50.994 [2024-12-14 18:52:06.507245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.994 [2024-12-14 18:52:06.507277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b471c0 (9): Bad file descriptor 00:31:50.994 [2024-12-14 18:52:06.507379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.994 [2024-12-14 18:52:06.507411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b471c0 with addr=10.0.0.3, port=4420 00:31:50.994 [2024-12-14 18:52:06.507423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b471c0 is same with the state(6) to be set 00:31:50.994 [2024-12-14 18:52:06.507441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b471c0 (9): Bad file descriptor 00:31:50.994 [2024-12-14 18:52:06.507467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.994 [2024-12-14 18:52:06.507476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.994 [2024-12-14 18:52:06.507486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.994 [2024-12-14 18:52:06.507506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:50.994 [2024-12-14 18:52:06.507517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.994 18:52:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:31:51.929 5686.50 IOPS, 22.21 MiB/s [2024-12-14T18:52:07.682Z] [2024-12-14 18:52:07.507656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.929 [2024-12-14 18:52:07.507721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b471c0 with addr=10.0.0.3, port=4420 00:31:51.929 [2024-12-14 18:52:07.507739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b471c0 is same with the state(6) to be set 00:31:51.929 [2024-12-14 18:52:07.507766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b471c0 (9): Bad file descriptor 00:31:51.929 [2024-12-14 18:52:07.507785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:51.929 [2024-12-14 18:52:07.507795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:51.929 [2024-12-14 18:52:07.507806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:51.929 [2024-12-14 18:52:07.507837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:51.929 [2024-12-14 18:52:07.507848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:52.865 3791.00 IOPS, 14.81 MiB/s [2024-12-14T18:52:08.618Z] [2024-12-14 18:52:08.507961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.865 [2024-12-14 18:52:08.508042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b471c0 with addr=10.0.0.3, port=4420 00:31:52.865 [2024-12-14 18:52:08.508057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b471c0 is same with the state(6) to be set 00:31:52.865 [2024-12-14 18:52:08.508080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b471c0 (9): Bad file descriptor 00:31:52.865 [2024-12-14 18:52:08.508097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:52.865 [2024-12-14 18:52:08.508105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:52.865 [2024-12-14 18:52:08.508115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:52.865 [2024-12-14 18:52:08.508140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:52.865 [2024-12-14 18:52:08.508151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:53.800 2843.25 IOPS, 11.11 MiB/s [2024-12-14T18:52:09.553Z] [2024-12-14 18:52:09.511484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.800 [2024-12-14 18:52:09.511583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b471c0 with addr=10.0.0.3, port=4420 00:31:53.800 [2024-12-14 18:52:09.511600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b471c0 is same with the state(6) to be set 00:31:53.800 [2024-12-14 18:52:09.511897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b471c0 (9): Bad file descriptor 00:31:53.800 [2024-12-14 18:52:09.512150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:53.800 [2024-12-14 18:52:09.512170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:53.800 [2024-12-14 18:52:09.512183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:53.800 [2024-12-14 18:52:09.515956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:53.800 [2024-12-14 18:52:09.516013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:53.800 18:52:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:54.366 [2024-12-14 18:52:09.816324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:54.366 18:52:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 117998 00:31:54.910 2274.60 IOPS, 8.89 MiB/s [2024-12-14T18:52:10.663Z] [2024-12-14 18:52:10.551767] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:56.782 3292.00 IOPS, 12.86 MiB/s [2024-12-14T18:52:13.470Z] 4353.86 IOPS, 17.01 MiB/s [2024-12-14T18:52:14.405Z] 5143.50 IOPS, 20.09 MiB/s [2024-12-14T18:52:15.778Z] 5755.67 IOPS, 22.48 MiB/s [2024-12-14T18:52:15.778Z] 6241.20 IOPS, 24.38 MiB/s 00:32:00.025 Latency(us) 00:32:00.025 [2024-12-14T18:52:15.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:00.025 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:00.025 Verification LBA range: start 0x0 length 0x4000 00:32:00.025 NVMe0n1 : 10.01 6248.56 24.41 3911.49 0.00 12562.94 729.83 3035150.89 00:32:00.025 [2024-12-14T18:52:15.778Z] =================================================================================================================== 00:32:00.025 [2024-12-14T18:52:15.778Z] Total : 6248.56 24.41 3911.49 0.00 12562.94 0.00 3035150.89 00:32:00.025 { 00:32:00.025 "results": [ 00:32:00.025 { 00:32:00.025 "job": "NVMe0n1", 00:32:00.025 "core_mask": "0x4", 00:32:00.025 "workload": "verify", 00:32:00.025 "status": "finished", 00:32:00.025 "verify_range": { 00:32:00.025 "start": 0, 00:32:00.025 "length": 16384 00:32:00.025 }, 00:32:00.025 "queue_depth": 128, 00:32:00.025 "io_size": 4096, 00:32:00.025 "runtime": 10.008712, 00:32:00.025 "iops": 6248.556257788215, 00:32:00.025 "mibps": 24.408422881985214, 00:32:00.025 "io_failed": 39149, 00:32:00.025 "io_timeout": 0, 00:32:00.025 "avg_latency_us": 12562.937828512782, 00:32:00.025 "min_latency_us": 729.8327272727273, 00:32:00.025 "max_latency_us": 3035150.8945454545 00:32:00.025 } 00:32:00.025 ], 00:32:00.025 "core_count": 1 00:32:00.025 } 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 117833 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 117833 ']' 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 117833 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 117833 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:32:00.025 killing process with pid 117833 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 117833' 00:32:00.025 Received shutdown signal, test time was about 10.000000 seconds 00:32:00.025 00:32:00.025 Latency(us) 00:32:00.025 [2024-12-14T18:52:15.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:00.025 [2024-12-14T18:52:15.778Z] =================================================================================================================== 00:32:00.025 [2024-12-14T18:52:15.778Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 117833 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 117833 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=118119 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 118119 /var/tmp/bdevperf.sock 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 118119 ']' 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:00.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:00.025 18:52:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:00.025 [2024-12-14 18:52:15.675949] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:00.025 [2024-12-14 18:52:15.676080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118119 ] 00:32:00.283 [2024-12-14 18:52:15.818014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.283 [2024-12-14 18:52:15.897074] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:01.217 18:52:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:01.217 18:52:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:32:01.217 18:52:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=118147 00:32:01.217 18:52:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 118119 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:32:01.217 18:52:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:32:01.217 18:52:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:32:01.784 NVMe0n1 00:32:01.784 18:52:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=118202 00:32:01.784 18:52:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:01.784 18:52:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:32:01.784 Running I/O for 10 seconds... 
00:32:02.717 18:52:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:02.979 20395.00 IOPS, 79.67 MiB/s [2024-12-14T18:52:18.732Z] [2024-12-14 18:52:18.556129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 
18:52:18.556385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd857f0 is same with the state(6) to be set 00:32:02.979 [2024-12-14 18:52:18.556769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.979 [2024-12-14 18:52:18.556800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.979 [2024-12-14 18:52:18.556822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.979 [2024-12-14 18:52:18.556833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.979 [2024-12-14 18:52:18.556844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.979 [2024-12-14 18:52:18.556853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.979 [2024-12-14 18:52:18.556864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.979 [2024-12-14 18:52:18.556873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.979 [2024-12-14 18:52:18.556891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.979 [2024-12-14 18:52:18.556900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.979 [2024-12-14 18:52:18.556911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.979 [2024-12-14 18:52:18.556919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.979 [2024-12-14 18:52:18.556929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:33296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.979 [2024-12-14 18:52:18.556938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.979 [2024-12-14 18:52:18.556965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112720 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.979 [2024-12-14 18:52:18.556973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.979 [2024-12-14 18:52:18.556984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.979 [2024-12-14 18:52:18.556993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.979 [2024-12-14 18:52:18.557019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.979 [2024-12-14 18:52:18.557042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.979 [2024-12-14 18:52:18.557067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.979 [2024-12-14 18:52:18.557075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.979 [2024-12-14 18:52:18.557085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.979 [2024-12-14 18:52:18.557094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.979 [2024-12-14 18:52:18.557104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.979 [2024-12-14 18:52:18.557112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.979 [2024-12-14 18:52:18.557122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.979 [2024-12-14 18:52:18.557131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.979 [2024-12-14 18:52:18.557141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.979 [2024-12-14 18:52:18.557150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:114344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:02.980 [2024-12-14 18:52:18.557206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:35944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:68224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:33320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 
18:52:18.557392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:91128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:124112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557590] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:27664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:28192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:34576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:116520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.980 [2024-12-14 18:52:18.557919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.980 [2024-12-14 18:52:18.557929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.557938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.557948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.557957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.557967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.557976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:115832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:33680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:35168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:123624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:02.981 [2024-12-14 18:52:18.558225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:48136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 
18:52:18.558411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:118456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.981 [2024-12-14 18:52:18.558594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-12-14 18:52:18.558602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same READ / ABORTED - SQ DELETION (00/08) notice pair repeats for every remaining queued READ on qid:1, dozens of cid/lba values, timestamps 18:52:18.558620 through 18:52:18.559449]
00:32:02.982 [2024-12-14 18:52:18.559459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95b6e0 is same with the state(6) to be set 00:32:02.982 [2024-12-14 18:52:18.559470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:02.982 [2024-12-14 18:52:18.559477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:02.982 [2024-12-14 18:52:18.559485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32080 len:8 PRP1 0x0
PRP2 0x0 00:32:02.982 [2024-12-14 18:52:18.559494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.982 [2024-12-14 18:52:18.559562] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x95b6e0 was disconnected and freed. reset controller. 00:32:02.982 [2024-12-14 18:52:18.559846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:02.982 [2024-12-14 18:52:18.559931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e31c0 (9): Bad file descriptor 00:32:02.982 [2024-12-14 18:52:18.560064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:02.982 [2024-12-14 18:52:18.560084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8e31c0 with addr=10.0.0.3, port=4420 00:32:02.982 [2024-12-14 18:52:18.560094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e31c0 is same with the state(6) to be set 00:32:02.982 [2024-12-14 18:52:18.560110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e31c0 (9): Bad file descriptor 00:32:02.982 [2024-12-14 18:52:18.560152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:02.982 [2024-12-14 18:52:18.560163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:02.982 [2024-12-14 18:52:18.560173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:02.982 [2024-12-14 18:52:18.560192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:02.982 [2024-12-14 18:52:18.560202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:02.982 18:52:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 118202 00:32:04.851 12144.00 IOPS, 47.44 MiB/s [2024-12-14T18:52:20.604Z] 8096.00 IOPS, 31.62 MiB/s [2024-12-14T18:52:20.604Z] [2024-12-14 18:52:20.560366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.851 [2024-12-14 18:52:20.560414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8e31c0 with addr=10.0.0.3, port=4420 00:32:04.851 [2024-12-14 18:52:20.560429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e31c0 is same with the state(6) to be set 00:32:04.851 [2024-12-14 18:52:20.560453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e31c0 (9): Bad file descriptor 00:32:04.851 [2024-12-14 18:52:20.560470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.851 [2024-12-14 18:52:20.560479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.852 [2024-12-14 18:52:20.560489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.852 [2024-12-14 18:52:20.560514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
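The repeating "connect() failed, errno = 111" / "resetting controller" records above are the bdev_nvme reconnect machinery retrying the TCP connection on a fixed delay after the target stopped listening on 10.0.0.3:4420. A minimal sketch of how that retry behaviour is configured at attach time; the controller name and the timeout values here are illustrative assumptions, not necessarily the exact flags timeout.sh passes:

  # Attach an NVMe-oF TCP controller that retries every 2s and gives up
  # after 8s of controller loss (values assumed for illustration).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 \
      --ctrlr-loss-timeout-sec 8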
00:32:04.852 [2024-12-14 18:52:20.560524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.720 6072.00 IOPS, 23.72 MiB/s [2024-12-14T18:52:22.744Z] 4857.60 IOPS, 18.98 MiB/s [2024-12-14T18:52:22.745Z] [2024-12-14 18:52:22.560680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.992 [2024-12-14 18:52:22.560903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8e31c0 with addr=10.0.0.3, port=4420 00:32:06.992 [2024-12-14 18:52:22.560928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e31c0 is same with the state(6) to be set 00:32:06.992 [2024-12-14 18:52:22.560959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e31c0 (9): Bad file descriptor 00:32:06.992 [2024-12-14 18:52:22.560979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.992 [2024-12-14 18:52:22.560988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.992 [2024-12-14 18:52:22.561000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.992 [2024-12-14 18:52:22.561028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.992 [2024-12-14 18:52:22.561039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.876 4048.00 IOPS, 15.81 MiB/s [2024-12-14T18:52:24.629Z] 3469.71 IOPS, 13.55 MiB/s [2024-12-14T18:52:24.629Z] [2024-12-14 18:52:24.561104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.876 [2024-12-14 18:52:24.561169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.876 [2024-12-14 18:52:24.561179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.876 [2024-12-14 18:52:24.561188] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:32:08.876 [2024-12-14 18:52:24.561212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
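The IOPS figures interleaved with these records (12144.00, 8096.00, 6072.00, 4857.60, 4048.00, 3469.71, and the closing 3036.00 just below) are bdevperf's running average: essentially all I/O completed before the first disconnect at ~2s, roughly 24288 I/Os (back-computed from the summary below: 2963.74 IOPS x 8.19504 s), so each later sample is simply that total divided by elapsed seconds. A quick check of that reading:

  # Running average decays as 24288/t once I/O stops completing.
  for t in 2 3 4 5 6 7 8; do
      printf '%ss: %.2f IOPS\n' "$t" "$(bc -l <<< "24288/$t")"
  done
  # -> 12144.00, 8096.00, 6072.00, 4857.60, 4048.00, 3469.71, 3036.00
  # Final summary line: 24288 / 8.19504 = 2963.74 IOPS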
00:32:09.815 3036.00 IOPS, 11.86 MiB/s 00:32:09.815 Latency(us) 00:32:09.815 [2024-12-14T18:52:25.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:09.815 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:32:09.815 NVMe0n1 : 8.20 2963.74 11.58 15.62 0.00 42927.16 1936.29 7015926.69 00:32:09.815 [2024-12-14T18:52:25.568Z] =================================================================================================================== 00:32:09.815 [2024-12-14T18:52:25.568Z] Total : 2963.74 11.58 15.62 0.00 42927.16 1936.29 7015926.69 00:32:09.815 { 00:32:09.815 "results": [ 00:32:09.815 { 00:32:09.815 "job": "NVMe0n1", 00:32:09.815 "core_mask": "0x4", 00:32:09.815 "workload": "randread", 00:32:09.815 "status": "finished", 00:32:09.815 "queue_depth": 128, 00:32:09.815 "io_size": 4096, 00:32:09.815 "runtime": 8.19504, 00:32:09.815 "iops": 2963.7439231535172, 00:32:09.815 "mibps": 11.577124699818427, 00:32:09.815 "io_failed": 128, 00:32:09.815 "io_timeout": 0, 00:32:09.815 "avg_latency_us": 42927.15518169903, 00:32:09.815 "min_latency_us": 1936.290909090909, 00:32:09.815 "max_latency_us": 7015926.69090909 00:32:09.815 } 00:32:09.815 ], 00:32:09.815 "core_count": 1 00:32:09.815 } 00:32:10.073 18:52:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:10.074 Attaching 5 probes... 00:32:10.074 1314.921482: reset bdev controller NVMe0 00:32:10.074 1315.075261: reconnect bdev controller NVMe0 00:32:10.074 3315.317612: reconnect delay bdev controller NVMe0 00:32:10.074 3315.351057: reconnect bdev controller NVMe0 00:32:10.074 5315.640689: reconnect delay bdev controller NVMe0 00:32:10.074 5315.673875: reconnect bdev controller NVMe0 00:32:10.074 7316.157641: reconnect delay bdev controller NVMe0 00:32:10.074 7316.191103: reconnect bdev controller NVMe0 00:32:10.074 18:52:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:32:10.074 18:52:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:32:10.074 18:52:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 118147 00:32:10.074 18:52:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:10.074 18:52:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 118119 00:32:10.074 18:52:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 118119 ']' 00:32:10.074 18:52:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 118119 00:32:10.074 18:52:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:32:10.074 18:52:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:10.074 18:52:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 118119 00:32:10.074 18:52:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:32:10.074 18:52:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:32:10.074 killing process with pid 118119 00:32:10.074 18:52:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 118119' 00:32:10.074 18:52:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 118119 00:32:10.074 
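The pass condition hiding in the "(( 3 <= 2 ))" line above: grep -c counted three 'reconnect delay bdev controller NVMe0' events in trace.txt, and the arithmetic test is false, so the script proceeds; fewer than three delayed reconnects would have failed the test. The shape of that check, spelled out (the script's exact failure action may differ):

  delay_count=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
  if (( delay_count <= 2 )); then
      exit 1    # fewer than three delayed reconnects: timeout logic did not engage
  fi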
Received shutdown signal, test time was about 8.264702 seconds 00:32:10.074 00:32:10.074 Latency(us) 00:32:10.074 [2024-12-14T18:52:25.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.074 [2024-12-14T18:52:25.827Z] =================================================================================================================== 00:32:10.074 [2024-12-14T18:52:25.827Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:10.074 18:52:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 118119 00:32:10.332 18:52:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:10.590 rmmod nvme_tcp 00:32:10.590 rmmod nvme_fabrics 00:32:10.590 rmmod nvme_keyring 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@513 -- # '[' -n 117558 ']' 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@514 -- # killprocess 117558 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 117558 ']' 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 117558 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 117558 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:10.590 killing process with pid 117558 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 117558' 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 117558 00:32:10.590 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 117558 00:32:10.849 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:10.849 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:10.849 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 
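Both teardown paths above run the same killprocess() helper from test/common/autotest_common.sh. A simplified sketch of the pattern the trace is executing; the real helper carries extra checks (e.g. the sudo/reactor process-name handling visible in the trace) omitted here:

  killprocess() {
      local pid=$1
      kill -0 "$pid"                        # error out if the PID is already gone
      if [[ $(uname) == Linux ]]; then
          ps --no-headers -o comm= "$pid"   # log which process is about to be killed
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                           # reap it and propagate its exit status
  }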
00:32:10.849 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:32:10.849 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-save 00:32:10.849 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:10.849 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:32:10.849 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:10.849 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:10.849 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:10.849 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:11.107 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:11.107 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:11.107 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:11.107 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:11.107 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:11.107 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:11.107 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:11.107 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:11.107 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:11.107 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:11.107 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:11.107 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:11.107 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.107 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:11.107 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.107 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:32:11.107 00:32:11.107 real 0m47.484s 00:32:11.107 user 2m18.875s 00:32:11.107 sys 0m5.509s 00:32:11.107 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:11.107 18:52:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:11.107 ************************************ 00:32:11.108 END TEST nvmf_timeout 00:32:11.108 ************************************ 00:32:11.437 18:52:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:32:11.437 18:52:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:11.437 ************************************ 00:32:11.437 END TEST nvmf_host 00:32:11.437 ************************************ 00:32:11.437 00:32:11.437 real 6m37.683s 00:32:11.437 user 18m18.688s 00:32:11.437 sys 1m16.939s 00:32:11.437 18:52:26 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:11.437 18:52:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.437 18:52:26 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:32:11.437 18:52:26 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:32:11.438 18:52:26 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:32:11.438 18:52:26 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:11.438 18:52:26 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:11.438 18:52:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:11.438 ************************************ 00:32:11.438 START TEST nvmf_target_core_interrupt_mode 00:32:11.438 ************************************ 00:32:11.438 18:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:32:11.438 * Looking for test storage... 00:32:11.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:11.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.438 --rc genhtml_branch_coverage=1 00:32:11.438 --rc genhtml_function_coverage=1 00:32:11.438 --rc genhtml_legend=1 00:32:11.438 --rc geninfo_all_blocks=1 00:32:11.438 --rc geninfo_unexecuted_blocks=1 00:32:11.438 00:32:11.438 ' 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:11.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.438 --rc genhtml_branch_coverage=1 00:32:11.438 --rc genhtml_function_coverage=1 00:32:11.438 --rc genhtml_legend=1 00:32:11.438 --rc geninfo_all_blocks=1 00:32:11.438 --rc geninfo_unexecuted_blocks=1 00:32:11.438 00:32:11.438 ' 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:11.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.438 --rc genhtml_branch_coverage=1 00:32:11.438 --rc genhtml_function_coverage=1 00:32:11.438 --rc genhtml_legend=1 00:32:11.438 --rc geninfo_all_blocks=1 00:32:11.438 --rc geninfo_unexecuted_blocks=1 00:32:11.438 00:32:11.438 ' 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:11.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.438 --rc genhtml_branch_coverage=1 00:32:11.438 --rc genhtml_function_coverage=1 00:32:11.438 --rc genhtml_legend=1 00:32:11.438 --rc geninfo_all_blocks=1 00:32:11.438 --rc geninfo_unexecuted_blocks=1 00:32:11.438 00:32:11.438 ' 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:11.438 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:11.439 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:11.439 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:11.439 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:11.439 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:11.439 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:11.439 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:32:11.439 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:32:11.439 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:32:11.439 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:11.439 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:11.439 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:11.439 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:11.439 ************************************ 00:32:11.439 START TEST nvmf_abort 00:32:11.439 ************************************ 00:32:11.439 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:11.698 * Looking for test storage... 00:32:11.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:32:11.698 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:11.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.699 --rc genhtml_branch_coverage=1 00:32:11.699 --rc genhtml_function_coverage=1 00:32:11.699 --rc genhtml_legend=1 00:32:11.699 --rc geninfo_all_blocks=1 00:32:11.699 --rc geninfo_unexecuted_blocks=1 00:32:11.699 00:32:11.699 ' 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:11.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.699 --rc genhtml_branch_coverage=1 00:32:11.699 --rc genhtml_function_coverage=1 00:32:11.699 --rc genhtml_legend=1 00:32:11.699 --rc geninfo_all_blocks=1 00:32:11.699 --rc geninfo_unexecuted_blocks=1 00:32:11.699 00:32:11.699 ' 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:11.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.699 --rc genhtml_branch_coverage=1 00:32:11.699 --rc genhtml_function_coverage=1 00:32:11.699 --rc genhtml_legend=1 00:32:11.699 --rc geninfo_all_blocks=1 00:32:11.699 --rc geninfo_unexecuted_blocks=1 00:32:11.699 00:32:11.699 ' 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:11.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.699 --rc genhtml_branch_coverage=1 00:32:11.699 --rc genhtml_function_coverage=1 00:32:11.699 --rc genhtml_legend=1 00:32:11.699 --rc geninfo_all_blocks=1 00:32:11.699 --rc geninfo_unexecuted_blocks=1 00:32:11.699 00:32:11.699 ' 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:11.699 18:52:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@456 -- # nvmf_veth_init 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:11.699 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:11.700 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:11.700 Cannot find device "nvmf_init_br" 00:32:11.700 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:32:11.700 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:11.700 Cannot find device "nvmf_init_br2" 00:32:11.700 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:32:11.700 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:11.700 Cannot find device "nvmf_tgt_br" 00:32:11.700 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 00:32:11.700 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:11.700 Cannot find device "nvmf_tgt_br2" 00:32:11.700 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 00:32:11.700 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:11.700 Cannot find device "nvmf_init_br" 00:32:11.700 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:32:11.700 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:11.700 Cannot find device "nvmf_init_br2" 00:32:11.700 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:32:11.700 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:11.958 Cannot find device "nvmf_tgt_br" 00:32:11.958 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 00:32:11.958 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:11.958 Cannot find device "nvmf_tgt_br2" 00:32:11.958 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 00:32:11.958 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:11.958 Cannot find device "nvmf_br" 00:32:11.958 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 00:32:11.958 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:11.958 Cannot find device "nvmf_init_if" 00:32:11.958 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 00:32:11.958 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:11.958 Cannot find device "nvmf_init_if2" 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:11.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:11.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:11.959 
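The nvmf_veth_init sequence above reduces to a small, fixed topology: two veth pairs whose "if" ends serve the initiator (kept in the default namespace) and two whose "if" ends serve the target (moved into nvmf_tgt_ns_spdk), all on one /24. A condensed sketch, using exactly the names and addresses recorded in the log:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br   # initiator, 10.0.0.1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2  # initiator, 10.0.0.2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br    # target,    10.0.0.3
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2   # target,    10.0.0.4
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk              # target ends leave the default netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

The *_br peer ends stay in the default namespace and are enslaved to the nvmf_br bridge in the next few commands; the bridge is what forwards traffic between the two namespaces.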
18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:11.959 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:12.218 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:32:12.218 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:32:12.218 00:32:12.218 --- 10.0.0.3 ping statistics --- 00:32:12.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.218 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:12.218 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:12.218 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.116 ms 00:32:12.218 00:32:12.218 --- 10.0.0.4 ping statistics --- 00:32:12.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.218 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:12.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:12.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:32:12.218 00:32:12.218 --- 10.0.0.1 ping statistics --- 00:32:12.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.218 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:12.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:12.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:32:12.218 00:32:12.218 --- 10.0.0.2 ping statistics --- 00:32:12.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.218 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@457 -- # return 0 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=118616 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 118616 00:32:12.218 18:52:27 
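The ACCEPT rules installed just before these pings came from ipts, which tags every rule with an SPDK_NVMF comment so teardown can find and remove exactly what the test added. A minimal reconstruction consistent with the expansions the log shows (the real helper lives in nvmf/common.sh@786):

    ipts() {
        # e.g. ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT becomes:
        # iptables -I INPUT 1 ... -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 ...'
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

The four pings then prove both directions: the host reaches the target addresses (10.0.0.3/.4), and from inside the namespace the target reaches the initiator addresses (10.0.0.1/.2).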
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 118616 ']' 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:12.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:12.218 18:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:12.218 [2024-12-14 18:52:27.875559] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:12.218 [2024-12-14 18:52:27.876852] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:12.218 [2024-12-14 18:52:27.876923] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:12.476 [2024-12-14 18:52:28.012706] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:12.477 [2024-12-14 18:52:28.083538] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:12.477 [2024-12-14 18:52:28.083643] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:12.477 [2024-12-14 18:52:28.083655] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:12.477 [2024-12-14 18:52:28.083663] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:12.477 [2024-12-14 18:52:28.083670] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:12.477 [2024-12-14 18:52:28.083984] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:12.477 [2024-12-14 18:52:28.084263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:12.477 [2024-12-14 18:52:28.084267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:12.477 [2024-12-14 18:52:28.202863] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:12.477 [2024-12-14 18:52:28.202997] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:12.477 [2024-12-14 18:52:28.203793] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:12.477 [2024-12-14 18:52:28.213867] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
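The notices above confirm interrupt mode took hold: -m 0xE yields three cores, one reactor starts on each of cores 1-3, and every spdk_thread (app_thread plus the three poll-group threads) is switched to intr mode. The launch the harness performed, reproducible standalone with the paths from the log:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE &   # -e 0xFFFF: the tracepoint group mask noted above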
00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:13.411 [2024-12-14 18:52:28.941821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:13.411 Malloc0 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:13.411 Delay0 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.411 18:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:13.411 18:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.411 18:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:32:13.411 18:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.411 18:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:13.411 18:52:29 
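rpc_cmd is the harness shorthand for scripts/rpc.py against the default /var/tmp/spdk.sock socket; the same configuration as standalone commands (a sketch, with every flag copied from the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0       # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=4096
    # 1,000,000 us of injected latency per op keeps I/O in flight long enough to be aborted
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0

The listeners on 10.0.0.3:4420 (the subsystem itself, then discovery) are added directly below.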
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.411 18:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:32:13.411 18:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.411 18:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:13.411 [2024-12-14 18:52:29.013926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:13.411 18:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.411 18:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:32:13.411 18:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.411 18:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:13.411 18:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.411 18:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:32:13.669 [2024-12-14 18:52:29.182290] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:15.571 Initializing NVMe Controllers 00:32:15.571 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:32:15.571 controller IO queue size 128 less than required 00:32:15.571 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:32:15.571 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:32:15.571 Initialization complete. Launching workers. 
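With the target ready, the abort example connects on a single core (-c 0x1), runs for one second (-t 1) at queue depth 128, and races abort commands against its own I/O; the "queue size 128 less than required" notice is expected here, since I/O queued at the driver is precisely what gives the aborts something to catch. The invocation, runnable on its own against the subsystem configured above:

    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128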
00:32:15.571 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30010 00:32:15.571 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30067, failed to submit 66 00:32:15.571 success 30010, unsuccessful 57, failed 0 00:32:15.571 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:15.571 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.571 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:15.571 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.571 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:32:15.571 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:32:15.571 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:15.571 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:32:15.572 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:15.572 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:32:15.572 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:15.572 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:15.572 rmmod nvme_tcp 00:32:15.572 rmmod nvme_fabrics 00:32:15.572 rmmod nvme_keyring 00:32:15.830 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:15.830 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:32:15.830 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:32:15.830 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 118616 ']' 00:32:15.830 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 118616 00:32:15.830 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 118616 ']' 00:32:15.830 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 118616 00:32:15.830 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:32:15.830 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:15.830 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 118616 00:32:15.830 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:15.830 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:15.830 killing process with pid 118616 00:32:15.830 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 118616' 00:32:15.830 
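The counters reconcile: 30067 aborts submitted plus 66 that could not be submitted gives 30133, exactly the 123 I/Os that completed normally plus the 30010 that were aborted; and of the submitted aborts, 30010 succeeded while 57 were unsuccessful (30010 + 57 = 30067), with none failing outright. With the result in, the trap-driven nvmftestfini unloads the nvme-tcp stack, as the rmmod lines above show.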
18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 118616 00:32:15.830 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 118616 00:32:16.089 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:16.089 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:16.089 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:16.089 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:32:16.089 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:32:16.089 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:16.089 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:32:16.089 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:16.089 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:16.090 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:16.090 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:16.090 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:16.090 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:16.090 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:16.090 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:16.090 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:16.090 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:16.090 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:16.090 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:16.348 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:16.348 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:16.348 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:16.348 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:16.348 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.348 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:16.348 18:52:31 
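iptr undoes only what ipts installed: it round-trips the whole ruleset and drops every entry carrying the SPDK_NVMF tag. The pipeline behind the three @787 commands above:

    iptables-save | grep -v SPDK_NVMF | iptables-restore

The veth/bridge teardown that accompanies it is the setup in reverse: nomaster to unenslave, down, then delete.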
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.348 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:32:16.348 00:32:16.348 real 0m4.812s 00:32:16.348 user 0m9.350s 00:32:16.348 sys 0m1.484s 00:32:16.348 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:16.348 18:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:16.348 ************************************ 00:32:16.348 END TEST nvmf_abort 00:32:16.348 ************************************ 00:32:16.348 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:16.348 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:16.348 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:16.348 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:16.348 ************************************ 00:32:16.348 START TEST nvmf_ns_hotplug_stress 00:32:16.348 ************************************ 00:32:16.348 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:16.608 * Looking for test storage... 00:32:16.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:32:16.608 18:52:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:16.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.608 --rc genhtml_branch_coverage=1 00:32:16.608 --rc genhtml_function_coverage=1 00:32:16.608 --rc genhtml_legend=1 00:32:16.608 --rc geninfo_all_blocks=1 00:32:16.608 --rc geninfo_unexecuted_blocks=1 00:32:16.608 00:32:16.608 ' 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:16.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.608 --rc genhtml_branch_coverage=1 00:32:16.608 --rc genhtml_function_coverage=1 00:32:16.608 --rc genhtml_legend=1 00:32:16.608 --rc geninfo_all_blocks=1 00:32:16.608 --rc geninfo_unexecuted_blocks=1 00:32:16.608 00:32:16.608 
' 00:32:16.608 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:16.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.609 --rc genhtml_branch_coverage=1 00:32:16.609 --rc genhtml_function_coverage=1 00:32:16.609 --rc genhtml_legend=1 00:32:16.609 --rc geninfo_all_blocks=1 00:32:16.609 --rc geninfo_unexecuted_blocks=1 00:32:16.609 00:32:16.609 ' 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:16.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.609 --rc genhtml_branch_coverage=1 00:32:16.609 --rc genhtml_function_coverage=1 00:32:16.609 --rc genhtml_legend=1 00:32:16.609 --rc geninfo_all_blocks=1 00:32:16.609 --rc geninfo_unexecuted_blocks=1 00:32:16.609 00:32:16.609 ' 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:16.609 18:52:32 
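The probe above splits each version string on ".", "-" and ":" and compares component by component, numerically. A compact reconstruction of the less-than check scripts/common.sh walks through (a sketch, not the verbatim source):

    lt() {   # exit 0 (true) when version $1 sorts before version $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x; pass branch/function flags explicitly"

Here 1 < 2 decides it on the first component, so the harness exports the LCOV_OPTS block seen above.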
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.609 18:52:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # nvmf_veth_init 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:16.609 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:16.610 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:16.610 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:16.610 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:16.610 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:16.610 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:16.610 Cannot find device "nvmf_init_br" 00:32:16.610 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:32:16.610 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 00:32:16.610 Cannot find device "nvmf_init_br2" 00:32:16.610 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:32:16.610 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:16.610 Cannot find device "nvmf_tgt_br" 00:32:16.610 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:32:16.610 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:16.610 Cannot find device "nvmf_tgt_br2" 00:32:16.610 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:32:16.610 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:16.610 Cannot find device "nvmf_init_br" 00:32:16.610 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:32:16.610 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:16.610 Cannot find device "nvmf_init_br2" 00:32:16.610 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:32:16.610 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:16.610 Cannot find device "nvmf_tgt_br" 00:32:16.610 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:32:16.610 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:16.869 Cannot find device "nvmf_tgt_br2" 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:16.869 Cannot find device "nvmf_br" 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:16.869 Cannot find device "nvmf_init_if" 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:16.869 Cannot find device "nvmf_init_if2" 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:16.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:16.869 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:16.869 18:52:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:16.869 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:17.128 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:17.128 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:32:17.128 00:32:17.128 --- 10.0.0.3 ping statistics --- 00:32:17.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.128 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:17.128 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:17.128 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:32:17.128 00:32:17.128 --- 10.0.0.4 ping statistics --- 00:32:17.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.128 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:17.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:17.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:32:17.128 00:32:17.128 --- 10.0.0.1 ping statistics --- 00:32:17.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.128 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:17.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:17.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:32:17.128 00:32:17.128 --- 10.0.0.2 ping statistics --- 00:32:17.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.128 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # return 0 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:17.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
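
[editor's note] The nvmf/common.sh fixture above first tears down any leftover test interfaces: the "Cannot find device" and "Cannot open network namespace" failures are expected, and each probe is followed by a guarding "true" so the script keeps going. It then rebuilds the topology: two initiator-side veth pairs (nvmf_init_if/_if2 at 10.0.0.1-.2) and two target-side pairs whose far ends (nvmf_tgt_if/_if2 at 10.0.0.3-.4) move into the nvmf_tgt_ns_spdk namespace, all of them joined through the nvmf_br bridge, plus iptables ACCEPT rules for TCP port 4420 (the ipts helper tags each rule with an 'SPDK_NVMF:' comment so it can be found and removed at cleanup). The four pings then prove reachability in both directions through the bridge. A minimal standalone sketch of one initiator/target path, using the interface names and addresses from the log (not the full common.sh logic):

    # Rebuild a single initiator->target path of the test topology (sketch).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br                     # enslave both bridge-side ends
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                          # host -> namespace through the bridge
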
00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=118932 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 118932 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 118932 ']' 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:17.128 18:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:17.128 [2024-12-14 18:52:32.756451] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:17.128 [2024-12-14 18:52:32.757960] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:17.128 [2024-12-14 18:52:32.758201] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:17.386 [2024-12-14 18:52:32.900621] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:17.386 [2024-12-14 18:52:32.996513] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:17.386 [2024-12-14 18:52:32.997264] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:17.386 [2024-12-14 18:52:32.997522] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:17.386 [2024-12-14 18:52:32.997614] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:17.386 [2024-12-14 18:52:32.997706] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:17.386 [2024-12-14 18:52:32.997929] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:17.386 [2024-12-14 18:52:32.998552] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:17.386 [2024-12-14 18:52:32.998566] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.386 [2024-12-14 18:52:33.129194] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:17.386 [2024-12-14 18:52:33.129352] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
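
[editor's note] nvmfappstart launches the target inside the namespace with --interrupt-mode and core mask 0xE, so DPDK reports three usable cores and reactors come up on cores 1-3; the thread.c notices confirm each SPDK thread is switched to interrupt mode, and -e 0xFFFF enables the tracepoint group mask noted above. A sketch of the launch-and-wait pattern (paths and flags from the log; the polling loop is a simplified stand-in for waitforlisten):

    NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    for _ in $(seq 1 100); do                        # wait until /var/tmp/spdk.sock answers RPCs
        "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        kill -0 "$nvmfpid" 2>/dev/null || exit 1     # give up if the target died during startup
        sleep 0.1
    done
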
00:32:17.386 [2024-12-14 18:52:33.130157] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:17.645 [2024-12-14 18:52:33.140038] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:18.213 18:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:18.213 18:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:32:18.213 18:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:18.213 18:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:18.213 18:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:18.213 18:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:18.213 18:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:32:18.213 18:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:18.472 [2024-12-14 18:52:34.143718] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:18.472 18:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:19.040 18:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:19.040 [2024-12-14 18:52:34.728286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:19.040 18:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:32:19.299 18:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:32:19.868 Malloc0 00:32:19.868 18:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:19.868 Delay0 00:32:19.868 18:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:20.127 18:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:32:20.385 NULL1 00:32:20.385 18:52:36 
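
[editor's note] With the target up (and the process_shm/nvmftestfini trap registered for cleanup), lines @27-@35 of ns_hotplug_stress.sh provision it: a TCP transport with 8192-byte in-capsule data, subsystem cnode1 capped at 10 namespaces (-m 10), data and discovery listeners on 10.0.0.3:4420, and the two namespace bdevs: Delay0, a delay bdev layered on a 32 MiB malloc bdev with 1000000 µs added read/write average and p99 latencies, and NULL1, a resizable 1000 MiB null bdev. Consolidated into one sequence (all RPCs exactly as they appear in the log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_create_transport -t tcp -o -u 8192                # TCP transport, 8 KiB in-capsule data
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $RPC bdev_malloc_create 32 512 -b Malloc0                   # 32 MiB backing bdev, 512 B blocks
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC bdev_null_create NULL1 1000 512                        # 1000 MiB null bdev, resized below
    $RPC nvmf_subsystem_add_ns "$NQN" Delay0                    # becomes NSID 1
    $RPC nvmf_subsystem_add_ns "$NQN" NULL1                     # becomes NSID 2
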
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:32:20.644 18:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:32:20.644 18:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=119059 00:32:20.644 18:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:20.644 18:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:22.021 Read completed with error (sct=0, sc=11) 00:32:22.021 18:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:22.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:22.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:22.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:22.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:22.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:22.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:22.280 18:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:32:22.280 18:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:32:22.539 true 00:32:22.539 18:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:22.539 18:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:23.475 18:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:23.475 18:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:32:23.475 18:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:32:23.734 true 00:32:23.734 18:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:23.734 18:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:24.301 18:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:24.301 18:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:32:24.301 18:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:32:24.560 true 00:32:24.560 18:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:24.560 18:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:24.819 18:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:25.078 18:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:32:25.078 18:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:32:25.337 true 00:32:25.337 18:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:25.337 18:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:26.273 18:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:26.532 18:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:32:26.532 18:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:32:26.791 true 00:32:26.791 18:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:26.791 18:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:27.050 18:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:27.309 18:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:32:27.309 18:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:32:27.568 true 00:32:27.568 18:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:27.568 18:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:27.827 18:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:28.086 18:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:32:28.086 18:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:32:28.344 true 00:32:28.344 18:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:28.344 18:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:29.280 18:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:29.540 18:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:32:29.540 18:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:32:29.798 true 00:32:29.799 18:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:29.799 18:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:30.057 18:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:30.316 18:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:32:30.316 18:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:32:30.884 true 00:32:30.884 18:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:30.884 18:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:30.884 18:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:31.143 18:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:32:31.143 18:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:32:31.712 true 00:32:31.712 18:52:47 
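
[editor's note] The pattern repeating above is the hotplug stress loop (sh@40-@50): spdk_nvme_perf runs a 30 s queue-depth-128 randread workload from the initiator side while the script hot-removes NSID 1, re-adds Delay0, and grows NULL1 one step per pass with bdev_null_resize. The suppressed "Read completed with error (sct=0, sc=11)" messages are the expected symptom: reads racing a detached namespace fail with Invalid Namespace or Format. A sketch of the driver loop, reusing the RPC/NQN shorthand from the previous sketch (command lines from the log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    $PERF -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
          -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do        # loop for as long as perf is alive
        $RPC nvmf_subsystem_remove_ns "$NQN" 1       # hot-unplug NSID 1 under active I/O
        $RPC nvmf_subsystem_add_ns "$NQN" Delay0     # plug it back in
        null_size=$((null_size + 1))
        $RPC bdev_null_resize NULL1 "$null_size"     # grow NSID 2 one size step per pass
    done
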
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:31.712 18:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:32.280 18:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:32.847 18:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:32:32.847 18:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:32:32.847 true 00:32:32.847 18:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:32.847 18:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:33.105 18:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:33.370 18:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:32:33.370 18:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:32:33.950 true 00:32:33.950 18:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:33.950 18:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:33.950 18:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:34.209 18:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:32:34.209 18:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:32:34.468 true 00:32:34.468 18:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:34.468 18:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:35.410 18:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:35.669 18:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:32:35.669 
18:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:32:35.929 true 00:32:35.929 18:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:35.929 18:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:36.188 18:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:36.447 18:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:32:36.447 18:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:32:36.706 true 00:32:36.706 18:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:36.706 18:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:36.966 18:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:37.225 18:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:32:37.225 18:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:32:37.485 true 00:32:37.485 18:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:37.485 18:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:38.423 18:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:38.682 18:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:32:38.682 18:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:32:38.941 true 00:32:38.941 18:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:38.941 18:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:39.201 18:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:39.461 18:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:32:39.461 18:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:32:39.721 true 00:32:39.721 18:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:39.721 18:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:39.980 18:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:40.239 18:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:32:40.239 18:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:32:40.498 true 00:32:40.498 18:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:40.498 18:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:41.434 18:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:41.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:41.692 18:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:32:41.692 18:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:32:41.951 true 00:32:41.951 18:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:41.951 18:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:42.210 18:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:42.469 18:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:32:42.469 18:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:32:42.469 true 00:32:42.729 18:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:42.729 18:52:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:43.666 18:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:43.666 18:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:32:43.666 18:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:32:43.925 true 00:32:43.925 18:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:43.925 18:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:44.184 18:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:44.443 18:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:32:44.443 18:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:32:44.701 true 00:32:44.701 18:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:44.701 18:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:44.960 18:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:45.220 18:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:32:45.220 18:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:32:45.479 true 00:32:45.479 18:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:45.479 18:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:46.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:46.428 18:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:46.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:46.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:46.699 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:32:46.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:46.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:46.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:46.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:46.699 18:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:32:46.699 18:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:32:46.958 true 00:32:46.958 18:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:46.958 18:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:47.894 18:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:47.894 18:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:32:47.894 18:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:32:48.153 true 00:32:48.153 18:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:48.153 18:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:48.412 18:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:48.671 18:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:32:48.671 18:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:32:48.930 true 00:32:48.930 18:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:48.930 18:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:49.189 18:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:49.447 18:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:32:49.447 18:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:32:49.704 
true 00:32:49.704 18:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:49.704 18:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:51.081 18:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:51.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:51.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:51.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:51.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:51.081 Message suppressed 999 times: Initializing NVMe Controllers
00:32:51.081 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:32:51.081 Controller IO queue size 128, less than required.
00:32:51.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:51.081 Controller IO queue size 128, less than required.
00:32:51.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:51.081 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:51.081 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:32:51.081 Initialization complete. Launching workers.
00:32:51.081 ========================================================
00:32:51.081                                                              Latency(us)
00:32:51.081 Device Information                                         :     IOPS    MiB/s   Average       min        max
00:32:51.081 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   717.93     0.35  79692.05   2973.20 1033680.09
00:32:51.081 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10105.48     4.93  12666.71   1395.47  515221.53
00:32:51.081 ========================================================
00:32:51.081 Total                                                      : 10823.41     5.28  17112.56   1395.47 1033680.09
00:32:51.081
00:32:51.081 Read completed with error (sct=0, sc=11)
00:32:51.081 18:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:32:51.081 18:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:32:51.340 true 00:32:51.340 18:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119059 00:32:51.340 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (119059) - No such process 00:32:51.340 18:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 119059 00:32:51.340 18:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:51.340 18:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:51.599
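
[editor's note] Reading the summary table: NSID 1 sits behind the Delay0 bdev and NSID 2 is the plain null bdev, which is why NSID 1 averages ~79.7 ms against NSID 2's ~12.7 ms, and why NSID 1's 1033680.09 µs max is consistent with the configured 1000000 µs delay. The averages land well below that delay most likely because reads aborted by the hotplug complete quickly with errors (the suppressed sct=0, sc=11 completions). The Total row checks out as the IOPS-weighted mean of the per-namespace averages: (717.93 * 79692.05 + 10105.48 * 12666.71) / 10823.41 ≈ 17112.6 µs, matching the reported 17112.56. The bash error "line 44: kill: (119059) - No such process" is the loop's normal exit condition: the 30 s perf run has ended, kill -0 fails, the loop stops after 29 resize passes (NULL1 grown from 1000 to 1029), wait reaps perf, and both namespaces are removed ahead of the parallel phase below.
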
18:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:32:51.599 18:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:32:51.599 18:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:32:51.599 18:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:51.599 18:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:32:51.859 null0 00:32:51.859 18:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:51.859 18:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:51.859 18:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:32:52.118 null1 00:32:52.118 18:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:52.118 18:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:52.118 18:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:32:52.377 null2 00:32:52.377 18:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:52.377 18:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:52.377 18:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:32:52.635 null3 00:32:52.636 18:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:52.636 18:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:52.636 18:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:32:52.894 null4 00:32:52.894 18:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:52.894 18:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:52.894 18:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:32:52.894 null5 00:32:53.153 18:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:53.153 18:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:53.153 18:53:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:32:53.153 null6 00:32:53.153 18:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:53.153 18:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:53.153 18:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:32:53.413 null7 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
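
[editor's note] The xtrace here looks shuffled (increments before their local declarations, nsid/bdev pairs out of order) because eight add_remove workers now run as concurrent background subshells, each writing its own trace into the same log; the interleaving is expected, not a script defect.
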
00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 120074 120075 120077 120078 120080 120082 120084 120087 00:32:53.413 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:53.672 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:53.672 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:53.672 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:53.672 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:53.672 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
00:32:53.931 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:53.931 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:32:53.931 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:53.931 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:53.931 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:53.931 18:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
[... condensed: the eight workers keep interleaving nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls for namespaces 1-8 (bdevs null0-null7) until each "(( i < 10 ))" loop guard fails, elapsed 00:32:53.931 through 00:32:59.372; the omitted entries, including the final loop-exit counter checks as each worker finishes its tenth pass, repeat the forms shown above ...]
00:32:59.372 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:32:59.372 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:32:59.372 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup
00:32:59.372 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:32:59.372 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:59.372 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:32:59.372 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:59.372 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:32:59.630 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:59.630 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
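The nvmf/common.sh@121-@128 traces just above are nvmfcleanup unloading the kernel NVMe modules after the stress loop exits. They suggest a cleanup of roughly this shape; the retry/break details between the traced commands are an assumption, not shown in the log:

# Hedged sketch of the nvmfcleanup step traced above; the break/sleep logic
# between the traced commands is assumed.
nvmfcleanup() {
    sync                                 # "@121": flush before removing modules
    if [[ tcp == tcp ]]; then            # "@123": transport check (tcp in this run)
        set +e                           # "@124": unload may fail while connections drain
        for i in {1..20}; do             # "@125": bounded retry
            modprobe -v -r nvme-tcp && break   # "@126"
            sleep 1                            # assumption: back off between attempts
        done
        modprobe -v -r nvme-fabrics            # "@127"
        set -e                                 # "@128"
    fi
}

The bare "rmmod nvme_tcp" / "rmmod nvme_fabrics" / "rmmod nvme_keyring" lines are modprobe -v's verbose output, confirming the unloads succeeded on the first pass.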
00:32:59.630 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:32:59.630 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 118932 ']'
00:32:59.630 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 118932
00:32:59.630 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 118932 ']'
00:32:59.630 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 118932
00:32:59.630 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:32:59.630 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:59.630 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 118932
killing process with pid 118932
00:32:59.630 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:59.630 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:59.630 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 118932'
00:32:59.630 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 118932
00:32:59.631 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 118932
00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save
00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore
00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
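The autotest_common.sh@950-@974 traces above are the killprocess helper stopping the target application (pid 118932, a reactor_1 process) before nvmf_tcp_fini starts dismantling the veth/bridge test topology. A sketch of the logic those traces imply; reconstructed, not the helper verbatim:

# Hedged reconstruction of killprocess from the "@950"-"@974" traces above.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                        # "@950": require a pid
    kill -0 "$pid"                                   # "@954": fail if not running
    local process_name
    if [ "$(uname)" = Linux ]; then                  # "@955"
        process_name=$(ps --no-headers -o comm= "$pid")   # "@956": reactor_1 here
    fi
    if [ "$process_name" = sudo ]; then              # "@960"
        kill -9 "$pid"                               # assumption: harsher path for sudo wrappers
    else
        echo "killing process with pid $pid"         # "@968"
        kill "$pid"                                  # "@969": SIGTERM
        wait "$pid" || true                          # "@974": reap; tolerating nonzero exit is assumed
    fi
}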
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:59.888 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.146 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:33:00.146 00:33:00.146 real 0m43.651s 00:33:00.146 user 3m13.455s 00:33:00.146 sys 0m16.896s 00:33:00.146 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:00.146 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:00.146 ************************************ 00:33:00.146 END TEST nvmf_ns_hotplug_stress 00:33:00.146 ************************************ 00:33:00.146 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:33:00.146 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:00.146 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:00.146 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:00.146 ************************************ 00:33:00.146 START TEST nvmf_delete_subsystem 00:33:00.146 ************************************ 00:33:00.146 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:33:00.146 * Looking for test storage... 00:33:00.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:00.146 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:00.146 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:33:00.146 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:00.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:00.406 --rc genhtml_branch_coverage=1 00:33:00.406 --rc genhtml_function_coverage=1 00:33:00.406 --rc genhtml_legend=1 00:33:00.406 --rc geninfo_all_blocks=1 00:33:00.406 --rc geninfo_unexecuted_blocks=1 00:33:00.406 00:33:00.406 ' 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:00.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:00.406 --rc genhtml_branch_coverage=1 00:33:00.406 --rc genhtml_function_coverage=1 00:33:00.406 --rc genhtml_legend=1 00:33:00.406 --rc geninfo_all_blocks=1 00:33:00.406 --rc geninfo_unexecuted_blocks=1 00:33:00.406 00:33:00.406 ' 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:00.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:00.406 --rc genhtml_branch_coverage=1 00:33:00.406 --rc genhtml_function_coverage=1 00:33:00.406 --rc genhtml_legend=1 00:33:00.406 --rc geninfo_all_blocks=1 00:33:00.406 --rc geninfo_unexecuted_blocks=1 00:33:00.406 00:33:00.406 ' 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:00.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:00.406 --rc genhtml_branch_coverage=1 00:33:00.406 --rc genhtml_function_coverage=1 00:33:00.406 --rc 
genhtml_legend=1 00:33:00.406 --rc geninfo_all_blocks=1 00:33:00.406 --rc geninfo_unexecuted_blocks=1 00:33:00.406 00:33:00.406 ' 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:00.406 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:00.407 18:53:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # nvmf_veth_init 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:00.407 18:53:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:00.407 Cannot find device "nvmf_init_br" 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:00.407 Cannot find device "nvmf_init_br2" 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:00.407 Cannot find device "nvmf_tgt_br" 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:00.407 Cannot find device "nvmf_tgt_br2" 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:33:00.407 18:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:00.407 Cannot find device "nvmf_init_br" 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:00.407 Cannot find device "nvmf_init_br2" 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:00.407 Cannot find device "nvmf_tgt_br" 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:00.407 Cannot find device "nvmf_tgt_br2" 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:00.407 Cannot find device "nvmf_br" 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:00.407 Cannot find device "nvmf_init_if" 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:00.407 Cannot find device "nvmf_init_if2" 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:00.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:00.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:00.407 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:00.666 18:53:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:00.666 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:00.666 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:33:00.666 00:33:00.666 --- 10.0.0.3 ping statistics --- 00:33:00.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.666 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:00.666 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:00.666 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:33:00.666 00:33:00.666 --- 10.0.0.4 ping statistics --- 00:33:00.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.666 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:00.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:00.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:33:00.666 00:33:00.666 --- 10.0.0.1 ping statistics --- 00:33:00.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.666 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:00.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:00.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:33:00.666 00:33:00.666 --- 10.0.0.2 ping statistics --- 00:33:00.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.666 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:00.666 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # return 0 00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=121451 00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 121451 00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 121451 ']' 00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:00.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:00.667 18:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:00.927 [2024-12-14 18:53:16.430531] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:00.927 [2024-12-14 18:53:16.431910] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:00.927 [2024-12-14 18:53:16.431975] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.927 [2024-12-14 18:53:16.575688] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:00.927 [2024-12-14 18:53:16.666525] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:00.927 [2024-12-14 18:53:16.666612] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:00.927 [2024-12-14 18:53:16.666641] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:00.927 [2024-12-14 18:53:16.666652] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:00.927 [2024-12-14 18:53:16.666662] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:00.927 [2024-12-14 18:53:16.666777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.927 [2024-12-14 18:53:16.666793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.185 [2024-12-14 18:53:16.796366] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:01.185 [2024-12-14 18:53:16.796799] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:01.185 [2024-12-14 18:53:16.796861] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:33:01.752 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:01.752 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:33:01.752 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:01.752 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:01.752 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:01.752 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:01.752 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:01.752 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.752 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.010 [2024-12-14 18:53:17.511673] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:02.010 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.010 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:02.010 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.010 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.010 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.010 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:02.010 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.010 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.010 [2024-12-14 18:53:17.532020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:02.010 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.011 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:33:02.011 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.011 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.011 NULL1 00:33:02.011 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.011 18:53:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:02.011 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.011 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.011 Delay0 00:33:02.011 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.011 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:02.011 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.011 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.011 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.011 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=121504 00:33:02.011 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:33:02.011 18:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:02.011 [2024-12-14 18:53:17.737788] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
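The Delay0 bdev keeps the 128-deep queue full for seconds at a time, so the nvmf_delete_subsystem call below races the subsystem teardown against in-flight I/O. The pattern being exercised, with perf flags copied from the trace above:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &   # 5 s of 70/30 randrw at queue depth 128 on cores 2-3
  sleep 2                                         # delete_subsystem.sh@30: let I/O pile up behind Delay0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # Outstanding commands then complete with sct=0, sc=8 (Command Aborted due to
  # SQ Deletion), which is the flood of "completed with error" records that follows.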
00:33:03.913 18:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:03.913 18:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.913 18:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 starting I/O failed: -6 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 starting I/O failed: -6 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 starting I/O failed: -6 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 starting I/O failed: -6 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 starting I/O failed: -6 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 starting I/O failed: -6 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 starting I/O failed: -6 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 starting I/O failed: -6 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 starting I/O failed: -6 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 starting I/O failed: -6 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 starting I/O failed: -6 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 [2024-12-14 18:53:19.776404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112f390 is same with the state(6) to be set 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Read 
completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 starting I/O failed: -6 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 starting I/O failed: -6 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 starting I/O failed: -6 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 starting I/O failed: -6 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 starting I/O failed: -6 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 starting I/O failed: -6 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Write completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error (sct=0, sc=8) 00:33:04.172 Read completed with error 
(sct=0, sc=8)
00:33:04.172 [... a long run of "Read/Write completed with error (sct=0, sc=8)" completions, interleaved with three "starting I/O failed: -6" markers, condensed ...]
00:33:04.172 [2024-12-14 18:53:19.776991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f22f4000c00 is same with the state(6) to be set
00:33:04.172 [... further repeated "completed with error (sct=0, sc=8)" completions condensed ...]
00:33:05.108 [2024-12-14 18:53:20.752965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114cb90 is same with the state(6) to be set
00:33:05.108 [... further repeated completions condensed ...]
00:33:05.108 [2024-12-14 18:53:20.773721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f22f400d7c0 is same with the state(6) to be set
00:33:05.108 [... further repeated completions condensed ...]
00:33:05.108 [2024-12-14 18:53:20.774544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f22f400cfe0 is same with the state(6) to be set
00:33:05.108 [... further repeated completions condensed ...]
00:33:05.108 [2024-12-14 18:53:20.775246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114cf50 is same with the state(6) to be set
00:33:05.108 [... further repeated completions condensed ...]
00:33:05.108 [2024-12-14 18:53:20.775877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1130530 is same with the state(6) to be set
00:33:05.108 Initializing NVMe Controllers
00:33:05.108 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:33:05.108 Controller IO queue size 128, less than required.
00:33:05.108 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:05.108 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:33:05.108 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:33:05.108 Initialization complete. Launching workers.
00:33:05.108 ========================================================
00:33:05.108                                                                                Latency(us)
00:33:05.108 Device Information                                                   :    IOPS   MiB/s    Average        min        max
00:33:05.108 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  167.68    0.08  899489.64     556.09 1012778.86
00:33:05.108 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  145.36    0.07  986449.37     689.16 1998892.09
00:33:05.108 ========================================================
00:33:05.108 Total                                                                :  313.04    0.15  939868.72     556.09 1998892.09
00:33:05.108
00:33:05.108 [2024-12-14 18:53:20.776404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114cb90 (9): Bad file descriptor
00:33:05.108 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:33:05.108 18:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:05.108 18:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:33:05.108 18:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 121504
00:33:05.108 18:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 121504
/home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (121504) - No such process
00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 121504
00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 121504
00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 121504
00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:05.676 [2024-12-14 18:53:21.300141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=121544 00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 121544 00:33:05.676 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:05.935 [2024-12-14 18:53:21.478241] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
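The trace above re-creates cnode1 over the RPC socket and immediately launches spdk_nvme_perf against it, then polls the perf pid (121544) every half second; the (sct=0, sc=8) completions condensed earlier are the expected result, since NVMe generic status 0x8 is "Command Aborted due to SQ Deletion" and the test deletes the subsystem while this I/O is in flight. A minimal sketch of the launch-and-poll idiom that this and the following iterations trace, with the script layout condensed and the -P flag's meaning an assumption:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -c 0xC \                                  # core mask 0b1100: workers on lcores 2 and 3, as logged
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 & # 3 s of 512-byte random I/O, 70% reads, queue depth 128; -P assumed: qpairs per namespace
perf_pid=$!                                   # 121544 in this run
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do     # kill -0 sends no signal, it only tests that the pid still exists
    (( delay++ > 20 )) && exit 1              # give up after ~10 s of 0.5 s naps
    sleep 0.5
done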
00:33:06.193 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:06.193 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 121544 00:33:06.193 18:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:06.761 18:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:06.761 18:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 121544 00:33:06.761 18:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:07.368 18:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:07.368 18:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 121544 00:33:07.368 18:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:07.629 18:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:07.629 18:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 121544 00:33:07.629 18:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:08.194 18:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:08.194 18:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 121544 00:33:08.194 18:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:08.760 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:08.760 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 121544 00:33:08.760 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:08.760 Initializing NVMe Controllers 00:33:08.760 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:33:08.760 Controller IO queue size 128, less than required. 00:33:08.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:08.760 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:08.760 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:08.760 Initialization complete. Launching workers. 
00:33:08.760 ========================================================
00:33:08.760                                                                                Latency(us)
00:33:08.760 Device Information                                                   :    IOPS   MiB/s    Average        min        max
00:33:08.760 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06 1003037.57 1000131.50 1010750.43
00:33:08.760 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06 1003626.98 1000182.36 1010385.62
00:33:08.760 ========================================================
00:33:08.760 Total                                                                :  256.00    0.12 1003332.27 1000131.50 1010750.43
00:33:08.760
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 121544
/home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (121544) - No such process
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 121544
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:09.327 rmmod nvme_tcp
00:33:09.327 rmmod nvme_fabrics
00:33:09.327 rmmod nvme_keyring
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 121451 ']'
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 121451
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 121451 ']'
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 121451
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:09.327 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 --
# ps --no-headers -o comm= 121451 00:33:09.328 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:09.328 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:09.328 killing process with pid 121451 00:33:09.328 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 121451' 00:33:09.328 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 121451 00:33:09.328 18:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 121451 00:33:09.585 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:09.585 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:09.585 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:09.585 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:33:09.585 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:33:09.585 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:09.585 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:33:09.585 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:09.585 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:09.585 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:09.585 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:09.585 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:09.585 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:09.585 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:09.585 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:09.585 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:09.585 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:09.585 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:09.844 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:09.844 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:33:09.844 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:09.844 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:09.844 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:09.844 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.844 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.844 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.844 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:33:09.844 00:33:09.844 real 0m9.738s 00:33:09.844 user 0m24.291s 00:33:09.844 sys 0m2.527s 00:33:09.844 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:09.844 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:09.844 ************************************ 00:33:09.844 END TEST nvmf_delete_subsystem 00:33:09.844 ************************************ 00:33:09.844 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:09.844 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:09.844 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:09.844 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:09.844 ************************************ 00:33:09.844 START TEST nvmf_host_management 00:33:09.844 ************************************ 00:33:09.844 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:10.104 * Looking for test storage... 
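The START/END banner blocks and the real/user/sys line above come from the suite's run_test wrapper; a hedged sketch of the behavior visible in this log (not the verbatim helper from autotest_common.sh):

run_test() {                      # sketch of the observable behavior only
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                     # produces the 'real 0m9.738s / user 0m24.291s / sys 0m2.527s' block above
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}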
00:33:10.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:33:10.104 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:10.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.105 --rc genhtml_branch_coverage=1 00:33:10.105 --rc genhtml_function_coverage=1 00:33:10.105 --rc genhtml_legend=1 00:33:10.105 --rc geninfo_all_blocks=1 00:33:10.105 --rc geninfo_unexecuted_blocks=1 00:33:10.105 00:33:10.105 ' 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:10.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.105 --rc genhtml_branch_coverage=1 00:33:10.105 --rc genhtml_function_coverage=1 00:33:10.105 --rc genhtml_legend=1 00:33:10.105 --rc geninfo_all_blocks=1 00:33:10.105 --rc geninfo_unexecuted_blocks=1 00:33:10.105 00:33:10.105 ' 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:10.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.105 --rc genhtml_branch_coverage=1 00:33:10.105 --rc genhtml_function_coverage=1 00:33:10.105 --rc genhtml_legend=1 00:33:10.105 --rc geninfo_all_blocks=1 00:33:10.105 --rc geninfo_unexecuted_blocks=1 00:33:10.105 00:33:10.105 ' 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:10.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.105 --rc genhtml_branch_coverage=1 00:33:10.105 --rc genhtml_function_coverage=1 00:33:10.105 --rc genhtml_legend=1 
00:33:10.105 --rc geninfo_all_blocks=1 00:33:10.105 --rc geninfo_unexecuted_blocks=1 00:33:10.105 00:33:10.105 ' 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same toolchain segments repeated several more times, condensed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=[... same value with /opt/go/1.21.1/bin promoted to the front, condensed ...]
00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=[... same value with /opt/protoc/21.7/bin promoted to the front, condensed ...]
00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo [... the exported PATH echoed back, identical to the @4 value, condensed ...]
00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:10.105 18:53:25
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@456 -- # nvmf_veth_init 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:10.105 18:53:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:10.105 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:10.106 Cannot find device "nvmf_init_br" 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:10.106 Cannot find device "nvmf_init_br2" 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:10.106 Cannot find device "nvmf_tgt_br" 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:10.106 Cannot find device "nvmf_tgt_br2" 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:10.106 Cannot find device "nvmf_init_br" 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 00:33:10.106 Cannot find device "nvmf_init_br2" 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:10.106 Cannot find device "nvmf_tgt_br" 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:10.106 Cannot find device "nvmf_tgt_br2" 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:10.106 Cannot find device "nvmf_br" 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:10.106 Cannot find device "nvmf_init_if" 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:33:10.106 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:10.364 Cannot find device "nvmf_init_if2" 00:33:10.364 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:33:10.364 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:10.364 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:10.364 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:33:10.364 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:10.364 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:10.364 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:33:10.364 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:10.364 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:10.364 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:10.364 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:10.364 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:10.364 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:10.364 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:33:10.364 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:10.364 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:10.364 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:10.364 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:10.364 18:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:10.364 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:10.364 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:10.364 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:10.364 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:10.364 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:10.364 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:10.364 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:10.364 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:10.364 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:10.364 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:10.364 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:10.364 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:10.364 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:10.364 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:10.623 18:53:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:10.623 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:10.623 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:33:10.623 00:33:10.623 --- 10.0.0.3 ping statistics --- 00:33:10.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.623 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:10.623 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:10.623 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.115 ms 00:33:10.623 00:33:10.623 --- 10.0.0.4 ping statistics --- 00:33:10.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.623 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:10.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:10.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:33:10.623 00:33:10.623 --- 10.0.0.1 ping statistics --- 00:33:10.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.623 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:10.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:10.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:33:10.623 00:33:10.623 --- 10.0.0.2 ping statistics --- 00:33:10.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.623 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@457 -- # return 0 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:33:10.623 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:33:10.624 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:10.624 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:10.624 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:10.624 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:33:10.624 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=121828 00:33:10.624 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 121828 00:33:10.624 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 121828 ']' 00:33:10.624 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.624 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:10.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:10.624 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:10.624 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:10.624 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:10.624 [2024-12-14 18:53:26.249719] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:10.624 [2024-12-14 18:53:26.250990] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:10.624 [2024-12-14 18:53:26.251061] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:10.882 [2024-12-14 18:53:26.390969] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:10.882 [2024-12-14 18:53:26.466008] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:10.882 [2024-12-14 18:53:26.466082] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:10.882 [2024-12-14 18:53:26.466092] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:10.882 [2024-12-14 18:53:26.466099] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:10.882 [2024-12-14 18:53:26.466106] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:10.882 [2024-12-14 18:53:26.466461] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:10.882 [2024-12-14 18:53:26.466662] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:33:10.882 [2024-12-14 18:53:26.466755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:33:10.882 [2024-12-14 18:53:26.466758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:10.882 [2024-12-14 18:53:26.591346] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:10.882 [2024-12-14 18:53:26.591853] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:10.882 [2024-12-14 18:53:26.592102] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:10.882 [2024-12-14 18:53:26.592309] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:10.882 [2024-12-14 18:53:26.592784] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
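The four reactors reported above (cores 1 through 4) follow directly from the core mask handed to the target: 0x1E is binary 11110, i.e. cores 1-4 with core 0 left free. The launch line from this trace, annotated for reference:

# -i 0: shared-memory instance id; -e 0xFFFF: enable all tracepoint groups (the mask echoed above);
# --interrupt-mode: reactors sleep on events instead of busy-polling; -m 0x1E: reactors on cores 1-4
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x1E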
00:33:10.882 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:10.882 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:33:10.882 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:10.882 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:10.882 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:11.142 [2024-12-14 18:53:26.684331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:11.142 Malloc0 00:33:11.142 [2024-12-14 18:53:26.768368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:11.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
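Note: the rm -rf rpcs.txt / cat / bare rpc_cmd triple traced above (host_management.sh lines 22-30) builds a batch of RPCs and feeds it to the target in one shot; xtrace does not print the heredoc body, so only its effects are visible (the Malloc0 bdev and the TCP listener on 10.0.0.3:4420). The reconstruction below is therefore a guess: the names Malloc0, cnode0, and host0 and the listener address come from the trace, while the malloc geometry, the serial number, and the exact batch spelling are illustrative assumptions:

    # Hypothetical contents of the batched RPC input -- not shown in the trace.
    rpc_cmd <<- EOF
        bdev_malloc_create 64 512 -b Malloc0
        nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
        nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
        nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
        nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    EOF

Adding host0 explicitly (rather than allowing any host) is what makes the remove_host fault injection further down meaningful.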
00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=121887 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 121887 /var/tmp/bdevperf.sock 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 121887 ']' 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:33:11.142 { 00:33:11.142 "params": { 00:33:11.142 "name": "Nvme$subsystem", 00:33:11.142 "trtype": "$TEST_TRANSPORT", 00:33:11.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:11.142 "adrfam": "ipv4", 00:33:11.142 "trsvcid": "$NVMF_PORT", 00:33:11.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:11.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:11.142 "hdgst": ${hdgst:-false}, 00:33:11.142 "ddgst": ${ddgst:-false} 00:33:11.142 }, 00:33:11.142 "method": "bdev_nvme_attach_controller" 00:33:11.142 } 00:33:11.142 EOF 00:33:11.142 )") 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 
00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:33:11.142 18:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:33:11.142 "params": { 00:33:11.142 "name": "Nvme0", 00:33:11.142 "trtype": "tcp", 00:33:11.142 "traddr": "10.0.0.3", 00:33:11.142 "adrfam": "ipv4", 00:33:11.142 "trsvcid": "4420", 00:33:11.142 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:11.142 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:11.142 "hdgst": false, 00:33:11.142 "ddgst": false 00:33:11.142 }, 00:33:11.142 "method": "bdev_nvme_attach_controller" 00:33:11.142 }' 00:33:11.142 [2024-12-14 18:53:26.883292] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:11.142 [2024-12-14 18:53:26.883393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121887 ] 00:33:11.402 [2024-12-14 18:53:27.027366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.402 [2024-12-14 18:53:27.110170] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.661 Running I/O for 10 seconds... 00:33:11.661 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:11.661 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:33:11.661 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:33:11.661 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.661 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:11.661 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.661 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:11.661 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:33:11.661 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:33:11.661 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:33:11.661 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:33:11.661 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:33:11.661 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:33:11.661 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:33:11.661 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:33:11.661 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.661 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:33:11.661 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:11.920 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.920 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:33:11.920 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:33:11.920 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:33:12.180 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:33:12.180 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:33:12.180 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:33:12.180 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:33:12.180 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.180 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:12.180 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.180 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=607 00:33:12.180 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 607 -ge 100 ']' 00:33:12.180 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:33:12.180 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:33:12.180 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:33:12.180 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:12.180 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.180 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:12.180 [2024-12-14 18:53:27.758583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:12.180 [2024-12-14 18:53:27.758683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.180 [2024-12-14 18:53:27.758699] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:12.180 [2024-12-14 18:53:27.758708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.180 [2024-12-14 18:53:27.758720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:12.181 [2024-12-14 18:53:27.758730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.758740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:12.181 [2024-12-14 18:53:27.758749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.758759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8d530 is same with the state(6) to be set 00:33:12.181 [2024-12-14 18:53:27.759107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.181 [2024-12-14 18:53:27.759718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.181 [2024-12-14 18:53:27.759727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.759737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.759746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.759756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.759766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.759777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.759786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.759797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.759806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.759818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.759828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.759838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.759847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.759858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.759867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.759877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.759886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.759897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.759905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.759916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.759924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.759937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.759946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.759956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.759965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.759976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.759984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.759995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.760004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.760014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.760023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.760033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.760042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.760052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.760061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.760073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.760083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.760094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.760103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.760115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.760124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.760135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.760145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.760156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.760165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.760175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.760183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.760194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.760203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.760214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.760222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.760232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.760241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.760252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.760260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.760271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.760280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.182 [2024-12-14 18:53:27.760290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.182 [2024-12-14 18:53:27.760301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:33:12.183 [2024-12-14 18:53:27.760311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.183 [2024-12-14 18:53:27.760320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.183 [2024-12-14 18:53:27.760332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.183 [2024-12-14 18:53:27.760340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.183 [2024-12-14 18:53:27.760351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.183 [2024-12-14 18:53:27.760365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.183 [2024-12-14 18:53:27.760376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.183 [2024-12-14 18:53:27.760385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.183 [2024-12-14 18:53:27.760395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.183 [2024-12-14 18:53:27.760404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.183 [2024-12-14 18:53:27.760415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.183 [2024-12-14 18:53:27.760423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.183 [2024-12-14 18:53:27.760434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:12.183 [2024-12-14 18:53:27.760443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.183 [2024-12-14 18:53:27.760539] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10ff060 was disconnected and freed. reset controller. 
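Note: the long run of ABORTED - SQ DELETION completions above, ending with qpair 0x10ff060 being disconnected and freed, is the target tearing down the host's queue pairs after the test revoked its access mid-run. Two traced steps lead up to it: waitforio (host_management.sh lines 52-64) polls bdevperf's per-bdev read counter until at least 100 reads complete, proving the I/O path works, and then nvmf_subsystem_remove_host yanks the host from the subsystem while I/O is still in flight. Reassembled from the traced commands into a standalone sketch:

    # waitforio, reassembled from the trace (sketch).
    i=10
    while (( i != 0 )); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && break   # enough reads observed (67, then 607 above)
        sleep 0.25
        (( i-- ))
    done
    # Revoke the host's access while bdevperf is still running:
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0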
00:33:12.183 [2024-12-14 18:53:27.761460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:12.183 task offset: 90112 on job bdev=Nvme0n1 fails 00:33:12.183 00:33:12.183 Latency(us) 00:33:12.183 [2024-12-14T18:53:27.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.183 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:12.183 Job: Nvme0n1 ended in about 0.44 seconds with error 00:33:12.183 Verification LBA range: start 0x0 length 0x400 00:33:12.183 Nvme0n1 : 0.44 1616.17 101.01 146.92 0.00 35246.80 2308.65 33840.41 00:33:12.183 [2024-12-14T18:53:27.936Z] =================================================================================================================== 00:33:12.183 [2024-12-14T18:53:27.936Z] Total : 1616.17 101.01 146.92 0.00 35246.80 2308.65 33840.41 00:33:12.183 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.183 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:12.183 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.183 [2024-12-14 18:53:27.763483] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:12.183 [2024-12-14 18:53:27.763522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8d530 (9): Bad file descriptor 00:33:12.183 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:12.183 [2024-12-14 18:53:27.764663] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:33:12.183 [2024-12-14 18:53:27.764765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:33:12.183 [2024-12-14 18:53:27.764788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.183 [2024-12-14 18:53:27.764807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:33:12.183 [2024-12-14 18:53:27.764820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:33:12.183 [2024-12-14 18:53:27.764829] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:12.183 [2024-12-14 18:53:27.764839] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe8d530 00:33:12.183 [2024-12-14 18:53:27.764879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8d530 (9): Bad file descriptor 00:33:12.183 [2024-12-14 18:53:27.764900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:12.183 [2024-12-14 18:53:27.764910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:12.183 [2024-12-14 18:53:27.764922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
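Note: the reset attempt above fails by design: once the host is gone from the allow list, every reconnect is rejected by access control ("does not allow host", sct 1 / sc 132), so bdevperf gives up with the failed-run latency table shown above (1616.17 IOPS with 146.92 Fail/s over 0.44 s). The trace then re-adds the host, reaps the already-dead bdevperf, and reruns it below, where the 1 s verify pass completes with zero failures. The recovery steps, reassembled as a sketch (the `|| true` guard matches the "No such process" message printed at host_management.sh:91 below):

    # Recovery sequence from the trace (sketch).
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1
    kill -9 "$perfpid" || true   # bdevperf already exited after the failed reset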
00:33:12.183 [2024-12-14 18:53:27.764940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.183 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.183 18:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:33:13.120 18:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 121887 00:33:13.120 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (121887) - No such process 00:33:13.120 18:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:33:13.120 18:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:33:13.120 18:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:33:13.120 18:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:33:13.120 18:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:33:13.120 18:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:33:13.120 18:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:33:13.120 18:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:33:13.120 { 00:33:13.120 "params": { 00:33:13.120 "name": "Nvme$subsystem", 00:33:13.120 "trtype": "$TEST_TRANSPORT", 00:33:13.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:13.120 "adrfam": "ipv4", 00:33:13.120 "trsvcid": "$NVMF_PORT", 00:33:13.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:13.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:13.120 "hdgst": ${hdgst:-false}, 00:33:13.120 "ddgst": ${ddgst:-false} 00:33:13.120 }, 00:33:13.120 "method": "bdev_nvme_attach_controller" 00:33:13.120 } 00:33:13.120 EOF 00:33:13.120 )") 00:33:13.120 18:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:33:13.120 18:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:33:13.120 18:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:33:13.120 18:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:33:13.120 "params": { 00:33:13.120 "name": "Nvme0", 00:33:13.120 "trtype": "tcp", 00:33:13.120 "traddr": "10.0.0.3", 00:33:13.120 "adrfam": "ipv4", 00:33:13.120 "trsvcid": "4420", 00:33:13.120 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:13.120 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:13.120 "hdgst": false, 00:33:13.120 "ddgst": false 00:33:13.120 }, 00:33:13.120 "method": "bdev_nvme_attach_controller" 00:33:13.120 }' 00:33:13.120 [2024-12-14 18:53:28.855821] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:33:13.120 [2024-12-14 18:53:28.856138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121926 ] 00:33:13.379 [2024-12-14 18:53:28.997207] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.379 [2024-12-14 18:53:29.068092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.638 Running I/O for 1 seconds... 00:33:14.574 1664.00 IOPS, 104.00 MiB/s 00:33:14.574 Latency(us) 00:33:14.574 [2024-12-14T18:53:30.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:14.574 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:14.574 Verification LBA range: start 0x0 length 0x400 00:33:14.574 Nvme0n1 : 1.03 1679.22 104.95 0.00 0.00 37459.24 5868.45 33840.41 00:33:14.574 [2024-12-14T18:53:30.327Z] =================================================================================================================== 00:33:14.574 [2024-12-14T18:53:30.327Z] Total : 1679.22 104.95 0.00 0.00 37459.24 5868.45 33840.41 00:33:14.833 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:33:14.833 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:15.092 rmmod nvme_tcp 00:33:15.092 rmmod nvme_fabrics 00:33:15.092 rmmod nvme_keyring 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 121828 ']' 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 121828 00:33:15.092 18:53:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 121828 ']' 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 121828 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 121828 00:33:15.092 killing process with pid 121828 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 121828' 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 121828 00:33:15.092 18:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 121828 00:33:15.351 [2024-12-14 18:53:31.004022] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:33:15.351 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:15.351 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:15.351 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:15.351 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:33:15.351 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:33:15.351 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:15.351 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:33:15.351 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:15.351 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:15.351 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:15.351 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:15.351 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:15.351 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:15.351 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:15.609 18:53:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:33:15.609 00:33:15.609 real 0m5.757s 00:33:15.609 user 0m18.551s 00:33:15.609 sys 0m2.281s 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:15.609 ************************************ 00:33:15.609 END TEST nvmf_host_management 00:33:15.609 ************************************ 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:15.609 ************************************ 00:33:15.609 START TEST nvmf_lvol 00:33:15.609 ************************************ 00:33:15.609 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:15.869 * Looking for test storage... 
00:33:15.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:15.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.869 --rc genhtml_branch_coverage=1 00:33:15.869 --rc genhtml_function_coverage=1 00:33:15.869 --rc genhtml_legend=1 00:33:15.869 --rc geninfo_all_blocks=1 00:33:15.869 --rc geninfo_unexecuted_blocks=1 00:33:15.869 00:33:15.869 ' 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:15.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.869 --rc genhtml_branch_coverage=1 00:33:15.869 --rc genhtml_function_coverage=1 00:33:15.869 --rc genhtml_legend=1 00:33:15.869 --rc geninfo_all_blocks=1 00:33:15.869 --rc geninfo_unexecuted_blocks=1 00:33:15.869 00:33:15.869 ' 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:15.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.869 --rc genhtml_branch_coverage=1 00:33:15.869 --rc genhtml_function_coverage=1 00:33:15.869 --rc genhtml_legend=1 00:33:15.869 --rc geninfo_all_blocks=1 00:33:15.869 --rc geninfo_unexecuted_blocks=1 00:33:15.869 00:33:15.869 ' 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:15.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.869 --rc genhtml_branch_coverage=1 00:33:15.869 --rc genhtml_function_coverage=1 00:33:15.869 --rc genhtml_legend=1 00:33:15.869 --rc geninfo_all_blocks=1 00:33:15.869 --rc geninfo_unexecuted_blocks=1 00:33:15.869 00:33:15.869 ' 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:15.869 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:15.870 18:53:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@456 -- # nvmf_veth_init 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:15.870 Cannot find device "nvmf_init_br" 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:15.870 Cannot find device "nvmf_init_br2" 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:15.870 Cannot find device "nvmf_tgt_br" 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:15.870 Cannot find device "nvmf_tgt_br2" 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:15.870 Cannot find device "nvmf_init_br" 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:33:15.870 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:16.129 Cannot find device "nvmf_init_br2" 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:16.129 Cannot find 
device "nvmf_tgt_br" 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:16.129 Cannot find device "nvmf_tgt_br2" 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:16.129 Cannot find device "nvmf_br" 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:16.129 Cannot find device "nvmf_init_if" 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:16.129 Cannot find device "nvmf_init_if2" 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:16.129 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:16.129 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:16.129 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:16.388 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:33:16.388 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:33:16.388 00:33:16.388 --- 10.0.0.3 ping statistics --- 00:33:16.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.388 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:16.388 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:16.388 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:33:16.388 00:33:16.388 --- 10.0.0.4 ping statistics --- 00:33:16.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.388 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:16.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:16.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:33:16.388 00:33:16.388 --- 10.0.0.1 ping statistics --- 00:33:16.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.388 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:16.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:16.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:33:16.388 00:33:16.388 --- 10.0.0.2 ping statistics --- 00:33:16.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.388 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@457 -- # return 0 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=122197 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 122197 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 122197 ']' 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:16.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:16.388 18:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:16.388 [2024-12-14 18:53:32.035449] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:16.388 [2024-12-14 18:53:32.036761] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:16.388 [2024-12-14 18:53:32.036834] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.647 [2024-12-14 18:53:32.177648] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:16.647 [2024-12-14 18:53:32.248062] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:16.647 [2024-12-14 18:53:32.248147] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.647 [2024-12-14 18:53:32.248159] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:16.647 [2024-12-14 18:53:32.248176] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:16.647 [2024-12-14 18:53:32.248183] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:16.647 [2024-12-14 18:53:32.248326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.647 [2024-12-14 18:53:32.248507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.647 [2024-12-14 18:53:32.248501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:16.647 [2024-12-14 18:53:32.363302] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:16.647 [2024-12-14 18:53:32.363354] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:16.647 [2024-12-14 18:53:32.363770] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:16.647 [2024-12-14 18:53:32.377052] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
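Note: the startup traced above is nvmfappstart launching the target inside the test namespace on three cores (-m 0x7) in interrupt mode, then blocking in waitforlisten until the RPC socket answers; the "Set spdk_thread ... to intr mode" notices confirm each poll group switched over before waitforlisten returns. A condensed sketch, assuming the default /var/tmp/spdk.sock socket; the polling loop is an illustrative stand-in for waitforlisten, not its actual implementation:

  # Start nvmf_tgt in the namespace, interrupt mode, core mask 0x7.
  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
  nvmfpid=$!
  # Poll the RPC socket until the app responds (stand-in for waitforlisten).
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid"   # abort if the target exited before listening
    sleep 0.1
  done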
00:33:16.906 18:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:16.906 18:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:33:16.906 18:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:16.906 18:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:16.906 18:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:16.906 18:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:16.906 18:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:17.165 [2024-12-14 18:53:32.749486] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.165 18:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:17.423 18:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:33:17.423 18:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:17.682 18:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:33:17.682 18:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:33:18.249 18:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:33:18.249 18:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=95895f47-f6d0-46ff-9704-f0ba4a6e8aa8 00:33:18.249 18:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 95895f47-f6d0-46ff-9704-f0ba4a6e8aa8 lvol 20 00:33:18.815 18:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8045acb9-bcfc-4877-b95a-27d0e13332db 00:33:18.815 18:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:19.073 18:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8045acb9-bcfc-4877-b95a-27d0e13332db 00:33:19.332 18:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:33:19.589 [2024-12-14 18:53:35.113458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:19.589 18:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:33:19.851 18:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:33:19.851 18:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=122330
00:33:19.851 18:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:33:20.798 18:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 8045acb9-bcfc-4877-b95a-27d0e13332db MY_SNAPSHOT
00:33:21.056 18:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=bace411e-45bf-495d-b0eb-b93d4b29b029
00:33:21.056 18:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 8045acb9-bcfc-4877-b95a-27d0e13332db 30
00:33:21.320 18:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone bace411e-45bf-495d-b0eb-b93d4b29b029 MY_CLONE
00:33:21.580 18:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1d935400-087a-471b-a205-5657ed634436
00:33:21.580 18:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 1d935400-087a-471b-a205-5657ed634436
00:33:22.147 18:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 122330
00:33:30.261 Initializing NVMe Controllers
00:33:30.261 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0
00:33:30.261 Controller IO queue size 128, less than required.
00:33:30.261 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:30.261 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:33:30.261 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:33:30.261 Initialization complete. Launching workers.
00:33:30.261 ========================================================
00:33:30.261                                                                            Latency(us)
00:33:30.261 Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:33:30.261 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   12192.40      47.63   10502.74    4558.36   58105.54
00:33:30.261 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   12486.60      48.78   10256.47    4983.29   64747.11
00:33:30.261 ========================================================
00:33:30.261 Total                                                                :   24679.00      96.40   10378.14    4558.36   64747.11
00:33:30.261
00:33:30.261 18:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:33:30.261 18:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8045acb9-bcfc-4877-b95a-27d0e13332db
00:33:30.519 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 95895f47-f6d0-46ff-9704-f0ba4a6e8aa8
00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup
00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:30.778 rmmod nvme_tcp
00:33:30.778 rmmod nvme_fabrics
00:33:30.778 rmmod nvme_keyring
00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 122197 ']'
00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 122197
00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 122197 ']'
00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 122197
00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 122197
00:33:30.778 18:53:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:30.778 killing process with pid 122197 00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 122197' 00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 122197 00:33:30.778 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 122197 00:33:31.036 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:31.036 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:31.036 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:31.036 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:33:31.037 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:33:31.037 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:31.037 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:33:31.037 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:31.037 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:31.037 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:31.295 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:31.295 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:31.295 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:31.295 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:31.295 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:31.295 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:31.295 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:31.295 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:31.295 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:31.295 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:31.295 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:31.295 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:31.295 
18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:31.295 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.295 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:31.295 18:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.295 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:33:31.295 00:33:31.295 real 0m15.683s 00:33:31.295 user 0m55.720s 00:33:31.295 sys 0m5.520s 00:33:31.295 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:31.295 ************************************ 00:33:31.295 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:31.295 END TEST nvmf_lvol 00:33:31.295 ************************************ 00:33:31.554 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:31.554 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:31.554 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:31.554 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:31.554 ************************************ 00:33:31.554 START TEST nvmf_lvs_grow 00:33:31.554 ************************************ 00:33:31.554 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:31.554 * Looking for test storage... 
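Note: the nvmf_lvol test that just ended (real 0m15.683s) drives the whole logical-volume stack over RPC: two malloc bdevs are striped into a raid0, an lvstore is built on the raid, a 20 (MiB) lvol is exported over NVMe/TCP, and while spdk_nvme_perf writes to it the test snapshots, resizes to 30, clones, and inflates. A condensed sketch of that sequence; the shell variables ($lvs, $lvol, $snap, $clone) are illustrative placeholders for the UUIDs the run captured from each command's output:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport
  $rpc bdev_malloc_create 64 512                                   # -> Malloc0
  $rpc bdev_malloc_create 64 512                                   # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe both
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # initial volume
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # ... spdk_nvme_perf runs against the listener in the background ...
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)              # snapshot under load
  $rpc bdev_lvol_resize "$lvol" 30                                 # grow 20 -> 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"                                  # detach clone from snapshot

The inflate step allocates the clone's clusters so it no longer depends on MY_SNAPSHOT, which is what lets the teardown delete the subsystem, lvol, and lvstore cleanly afterwards.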
00:33:31.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:31.554 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:31.554 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:33:31.554 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:31.554 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:31.554 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:31.554 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:31.554 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:31.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.555 --rc genhtml_branch_coverage=1 00:33:31.555 --rc genhtml_function_coverage=1 00:33:31.555 --rc genhtml_legend=1 00:33:31.555 --rc geninfo_all_blocks=1 00:33:31.555 --rc geninfo_unexecuted_blocks=1 00:33:31.555 00:33:31.555 ' 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:31.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.555 --rc genhtml_branch_coverage=1 00:33:31.555 --rc genhtml_function_coverage=1 00:33:31.555 --rc genhtml_legend=1 00:33:31.555 --rc geninfo_all_blocks=1 00:33:31.555 --rc geninfo_unexecuted_blocks=1 00:33:31.555 00:33:31.555 ' 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:31.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.555 --rc genhtml_branch_coverage=1 00:33:31.555 --rc genhtml_function_coverage=1 00:33:31.555 --rc genhtml_legend=1 00:33:31.555 --rc geninfo_all_blocks=1 00:33:31.555 --rc geninfo_unexecuted_blocks=1 00:33:31.555 00:33:31.555 ' 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:31.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.555 --rc genhtml_branch_coverage=1 00:33:31.555 --rc genhtml_function_coverage=1 00:33:31.555 --rc genhtml_legend=1 00:33:31.555 --rc geninfo_all_blocks=1 00:33:31.555 --rc geninfo_unexecuted_blocks=1 00:33:31.555 00:33:31.555 ' 00:33:31.555 18:53:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:31.555 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:31.556 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:31.556 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:33:31.556 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:31.556 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:31.556 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:31.556 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:31.556 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:31.556 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.556 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:31.556 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@456 -- # nvmf_veth_init 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:31.815 18:53:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:31.815 Cannot find device "nvmf_init_br" 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:31.815 Cannot find device "nvmf_init_br2" 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:31.815 Cannot find device "nvmf_tgt_br" 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:31.815 Cannot find device "nvmf_tgt_br2" 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:31.815 Cannot find device "nvmf_init_br" 00:33:31.815 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:31.816 Cannot find device "nvmf_init_br2" 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:31.816 Cannot find device "nvmf_tgt_br" 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:31.816 Cannot find device "nvmf_tgt_br2" 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:31.816 Cannot find device "nvmf_br" 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:31.816 Cannot find device "nvmf_init_if" 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:31.816 Cannot find device "nvmf_init_if2" 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:31.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:31.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:31.816 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:32.075 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:32.075 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:32.075 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:32.075 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:32.075 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:32.075 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:32.075 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:32.075 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:32.075 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:32.075 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:32.075 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:32.075 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:32.075 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 00:33:32.075 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:32.075 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:33:32.075 00:33:32.075 --- 10.0.0.3 ping statistics --- 00:33:32.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.075 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:33:32.075 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:32.075 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:32.075 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:33:32.075 00:33:32.075 --- 10.0.0.4 ping statistics --- 00:33:32.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.075 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:33:32.075 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:32.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:32.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:33:32.075 00:33:32.075 --- 10.0.0.1 ping statistics --- 00:33:32.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.075 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:33:32.075 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:32.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:32.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:33:32.075 00:33:32.075 --- 10.0.0.2 ping statistics --- 00:33:32.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.075 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:33:32.075 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:32.075 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@457 -- # return 0 00:33:32.075 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:32.076 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:32.076 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:32.076 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:32.076 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:32.076 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:32.076 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:32.076 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:33:32.076 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:32.076 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:32.076 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:32.076 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=122741 00:33:32.076 18:53:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 122741 00:33:32.076 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:32.076 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 122741 ']' 00:33:32.076 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:32.076 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:32.076 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:32.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:32.076 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:32.076 18:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:32.076 [2024-12-14 18:53:47.777020] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:32.076 [2024-12-14 18:53:47.778343] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:32.076 [2024-12-14 18:53:47.779097] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:32.334 [2024-12-14 18:53:47.915036] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.334 [2024-12-14 18:53:47.990170] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:32.334 [2024-12-14 18:53:47.990229] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:32.334 [2024-12-14 18:53:47.990240] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:32.334 [2024-12-14 18:53:47.990247] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:32.334 [2024-12-14 18:53:47.990253] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:32.334 [2024-12-14 18:53:47.990283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.593 [2024-12-14 18:53:48.102310] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:32.593 [2024-12-14 18:53:48.102628] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
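[Note] The target bring-up that waitforlisten is guarding above can be reproduced by hand with the same command the harness logs: nvmf_tgt is launched inside the test namespace, pinned to core 0 in interrupt mode, and the RPC socket is polled until it answers. The readiness loop below is a simplified stand-in for the real waitforlisten helper, which has additional retry and liveness logic:

    # start the target inside the test namespace, core 0, interrupt mode
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!

    # poll the RPC socket until the app is ready to serve requests
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done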
00:33:32.593 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:32.593 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:33:32.593 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:32.593 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:32.593 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:32.593 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:32.593 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:32.852 [2024-12-14 18:53:48.483125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.852 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:33:32.852 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:32.852 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:32.852 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:32.852 ************************************ 00:33:32.852 START TEST lvs_grow_clean 00:33:32.852 ************************************ 00:33:32.852 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:33:32.852 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:32.852 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:32.852 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:32.852 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:32.852 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:32.852 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:32.852 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:33:32.852 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:33:32.852 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:33.111 18:53:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:33.111 18:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:33.677 18:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c0c27afa-57b1-4637-8b28-88b5bb0c0786 00:33:33.677 18:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0c27afa-57b1-4637-8b28-88b5bb0c0786 00:33:33.677 18:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:33.677 18:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:33.677 18:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:33.677 18:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0c27afa-57b1-4637-8b28-88b5bb0c0786 lvol 150 00:33:33.936 18:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ebe5fa65-2505-4a74-b0f1-5a752756e6f9 00:33:33.936 18:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:33:33.936 18:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:34.194 [2024-12-14 18:53:49.902884] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:34.194 [2024-12-14 18:53:49.903021] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:34.194 true 00:33:34.194 18:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0c27afa-57b1-4637-8b28-88b5bb0c0786 00:33:34.194 18:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:34.452 18:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:34.452 18:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:34.723 18:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ebe5fa65-2505-4a74-b0f1-5a752756e6f9 00:33:35.000 18:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:33:35.258 [2024-12-14 18:53:50.859563] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:35.258 18:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:33:35.517 18:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:35.517 18:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=122884 00:33:35.517 18:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:35.517 18:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 122884 /var/tmp/bdevperf.sock 00:33:35.517 18:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 122884 ']' 00:33:35.517 18:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:35.517 18:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:35.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:35.517 18:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:35.517 18:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:35.517 18:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:35.517 [2024-12-14 18:53:51.194522] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
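[Note] The bdevperf side mirrors the target bring-up: the app is started paused (-z) on its own RPC socket, the NVMe-oF namespace is attached as a bdev over that socket, and only then is I/O kicked off via bdevperf.py (the perform_tests call seen later in this log). A condensed sketch of that sequence, using the exact flags and addresses logged here:

    # start bdevperf paused: 4K I/O, queue depth 128, 10s randwrite on core 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # attach the target's subsystem over TCP as bdev Nvme0n1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0

    # release the paused app and run the configured workload
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests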
00:33:35.517 [2024-12-14 18:53:51.194629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122884 ] 00:33:35.776 [2024-12-14 18:53:51.329488] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.776 [2024-12-14 18:53:51.414725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.711 18:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:36.711 18:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:33:36.711 18:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:36.711 Nvme0n1 00:33:36.970 18:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:36.970 [ 00:33:36.970 { 00:33:36.970 "aliases": [ 00:33:36.970 "ebe5fa65-2505-4a74-b0f1-5a752756e6f9" 00:33:36.970 ], 00:33:36.970 "assigned_rate_limits": { 00:33:36.970 "r_mbytes_per_sec": 0, 00:33:36.970 "rw_ios_per_sec": 0, 00:33:36.970 "rw_mbytes_per_sec": 0, 00:33:36.970 "w_mbytes_per_sec": 0 00:33:36.970 }, 00:33:36.970 "block_size": 4096, 00:33:36.970 "claimed": false, 00:33:36.970 "driver_specific": { 00:33:36.970 "mp_policy": "active_passive", 00:33:36.970 "nvme": [ 00:33:36.970 { 00:33:36.970 "ctrlr_data": { 00:33:36.970 "ana_reporting": false, 00:33:36.970 "cntlid": 1, 00:33:36.970 "firmware_revision": "24.09.1", 00:33:36.970 "model_number": "SPDK bdev Controller", 00:33:36.970 "multi_ctrlr": true, 00:33:36.970 "oacs": { 00:33:36.970 "firmware": 0, 00:33:36.970 "format": 0, 00:33:36.970 "ns_manage": 0, 00:33:36.970 "security": 0 00:33:36.970 }, 00:33:36.970 "serial_number": "SPDK0", 00:33:36.970 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:36.970 "vendor_id": "0x8086" 00:33:36.970 }, 00:33:36.970 "ns_data": { 00:33:36.970 "can_share": true, 00:33:36.970 "id": 1 00:33:36.970 }, 00:33:36.970 "trid": { 00:33:36.970 "adrfam": "IPv4", 00:33:36.970 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:36.970 "traddr": "10.0.0.3", 00:33:36.970 "trsvcid": "4420", 00:33:36.970 "trtype": "TCP" 00:33:36.970 }, 00:33:36.970 "vs": { 00:33:36.970 "nvme_version": "1.3" 00:33:36.970 } 00:33:36.970 } 00:33:36.970 ] 00:33:36.970 }, 00:33:36.970 "memory_domains": [ 00:33:36.970 { 00:33:36.970 "dma_device_id": "system", 00:33:36.970 "dma_device_type": 1 00:33:36.970 } 00:33:36.970 ], 00:33:36.970 "name": "Nvme0n1", 00:33:36.970 "num_blocks": 38912, 00:33:36.970 "numa_id": -1, 00:33:36.970 "product_name": "NVMe disk", 00:33:36.970 "supported_io_types": { 00:33:36.970 "abort": true, 00:33:36.970 "compare": true, 00:33:36.970 "compare_and_write": true, 00:33:36.970 "copy": true, 00:33:36.970 "flush": true, 00:33:36.970 "get_zone_info": false, 00:33:36.970 "nvme_admin": true, 00:33:36.970 "nvme_io": true, 00:33:36.970 "nvme_io_md": false, 00:33:36.970 "nvme_iov_md": false, 00:33:36.970 "read": true, 00:33:36.970 "reset": true, 00:33:36.970 "seek_data": false, 00:33:36.970 
"seek_hole": false, 00:33:36.970 "unmap": true, 00:33:36.970 "write": true, 00:33:36.970 "write_zeroes": true, 00:33:36.970 "zcopy": false, 00:33:36.970 "zone_append": false, 00:33:36.970 "zone_management": false 00:33:36.970 }, 00:33:36.970 "uuid": "ebe5fa65-2505-4a74-b0f1-5a752756e6f9", 00:33:36.970 "zoned": false 00:33:36.970 } 00:33:36.970 ] 00:33:36.970 18:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:36.970 18:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=122932 00:33:36.970 18:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:37.229 Running I/O for 10 seconds... 00:33:38.164 Latency(us) 00:33:38.164 [2024-12-14T18:53:53.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:38.164 Nvme0n1 : 1.00 7797.00 30.46 0.00 0.00 0.00 0.00 0.00 00:33:38.164 [2024-12-14T18:53:53.917Z] =================================================================================================================== 00:33:38.164 [2024-12-14T18:53:53.917Z] Total : 7797.00 30.46 0.00 0.00 0.00 0.00 0.00 00:33:38.164 00:33:39.100 18:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c0c27afa-57b1-4637-8b28-88b5bb0c0786 00:33:39.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:39.100 Nvme0n1 : 2.00 8119.00 31.71 0.00 0.00 0.00 0.00 0.00 00:33:39.100 [2024-12-14T18:53:54.853Z] =================================================================================================================== 00:33:39.100 [2024-12-14T18:53:54.853Z] Total : 8119.00 31.71 0.00 0.00 0.00 0.00 0.00 00:33:39.100 00:33:39.359 true 00:33:39.359 18:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0c27afa-57b1-4637-8b28-88b5bb0c0786 00:33:39.359 18:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:39.618 18:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:39.618 18:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:39.618 18:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 122932 00:33:40.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:40.184 Nvme0n1 : 3.00 8284.00 32.36 0.00 0.00 0.00 0.00 0.00 00:33:40.184 [2024-12-14T18:53:55.937Z] =================================================================================================================== 00:33:40.185 [2024-12-14T18:53:55.938Z] Total : 8284.00 32.36 0.00 0.00 0.00 0.00 0.00 00:33:40.185 00:33:41.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:41.120 Nvme0n1 : 4.00 8403.25 32.83 0.00 0.00 0.00 0.00 0.00 00:33:41.120 
[2024-12-14T18:53:56.873Z] =================================================================================================================== 00:33:41.120 [2024-12-14T18:53:56.873Z] Total : 8403.25 32.83 0.00 0.00 0.00 0.00 0.00 00:33:41.120 00:33:42.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:42.088 Nvme0n1 : 5.00 8449.00 33.00 0.00 0.00 0.00 0.00 0.00 00:33:42.088 [2024-12-14T18:53:57.841Z] =================================================================================================================== 00:33:42.088 [2024-12-14T18:53:57.841Z] Total : 8449.00 33.00 0.00 0.00 0.00 0.00 0.00 00:33:42.088 00:33:43.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:43.056 Nvme0n1 : 6.00 8490.33 33.17 0.00 0.00 0.00 0.00 0.00 00:33:43.056 [2024-12-14T18:53:58.809Z] =================================================================================================================== 00:33:43.056 [2024-12-14T18:53:58.809Z] Total : 8490.33 33.17 0.00 0.00 0.00 0.00 0.00 00:33:43.056 00:33:44.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:44.431 Nvme0n1 : 7.00 8501.29 33.21 0.00 0.00 0.00 0.00 0.00 00:33:44.431 [2024-12-14T18:54:00.184Z] =================================================================================================================== 00:33:44.431 [2024-12-14T18:54:00.184Z] Total : 8501.29 33.21 0.00 0.00 0.00 0.00 0.00 00:33:44.431 00:33:45.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:45.367 Nvme0n1 : 8.00 8438.62 32.96 0.00 0.00 0.00 0.00 0.00 00:33:45.367 [2024-12-14T18:54:01.120Z] =================================================================================================================== 00:33:45.367 [2024-12-14T18:54:01.120Z] Total : 8438.62 32.96 0.00 0.00 0.00 0.00 0.00 00:33:45.367 00:33:46.303 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:46.303 Nvme0n1 : 9.00 8418.67 32.89 0.00 0.00 0.00 0.00 0.00 00:33:46.303 [2024-12-14T18:54:02.056Z] =================================================================================================================== 00:33:46.303 [2024-12-14T18:54:02.056Z] Total : 8418.67 32.89 0.00 0.00 0.00 0.00 0.00 00:33:46.303 00:33:47.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:47.238 Nvme0n1 : 10.00 8178.90 31.95 0.00 0.00 0.00 0.00 0.00 00:33:47.238 [2024-12-14T18:54:02.991Z] =================================================================================================================== 00:33:47.238 [2024-12-14T18:54:02.991Z] Total : 8178.90 31.95 0.00 0.00 0.00 0.00 0.00 00:33:47.238 00:33:47.238 00:33:47.238 Latency(us) 00:33:47.238 [2024-12-14T18:54:02.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:47.238 Nvme0n1 : 10.02 8175.42 31.94 0.00 0.00 15639.77 6702.55 29193.31 00:33:47.238 [2024-12-14T18:54:02.992Z] =================================================================================================================== 00:33:47.239 [2024-12-14T18:54:02.992Z] Total : 8175.42 31.94 0.00 0.00 15639.77 6702.55 29193.31 00:33:47.239 { 00:33:47.239 "results": [ 00:33:47.239 { 00:33:47.239 "job": "Nvme0n1", 00:33:47.239 "core_mask": "0x2", 00:33:47.239 "workload": "randwrite", 00:33:47.239 "status": "finished", 00:33:47.239 "queue_depth": 128, 00:33:47.239 "io_size": 4096, 
00:33:47.239 "runtime": 10.018929, 00:33:47.239 "iops": 8175.42473851247, 00:33:47.239 "mibps": 31.935252884814336, 00:33:47.239 "io_failed": 0, 00:33:47.239 "io_timeout": 0, 00:33:47.239 "avg_latency_us": 15639.765520716448, 00:33:47.239 "min_latency_us": 6702.545454545455, 00:33:47.239 "max_latency_us": 29193.30909090909 00:33:47.239 } 00:33:47.239 ], 00:33:47.239 "core_count": 1 00:33:47.239 } 00:33:47.239 18:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 122884 00:33:47.239 18:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 122884 ']' 00:33:47.239 18:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 122884 00:33:47.239 18:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:33:47.239 18:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:47.239 18:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 122884 00:33:47.239 18:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:47.239 18:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:47.239 killing process with pid 122884 00:33:47.239 18:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 122884' 00:33:47.239 18:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 122884 00:33:47.239 Received shutdown signal, test time was about 10.000000 seconds 00:33:47.239 00:33:47.239 Latency(us) 00:33:47.239 [2024-12-14T18:54:02.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.239 [2024-12-14T18:54:02.992Z] =================================================================================================================== 00:33:47.239 [2024-12-14T18:54:02.992Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:47.239 18:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 122884 00:33:47.497 18:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:33:47.756 18:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:48.014 18:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0c27afa-57b1-4637-8b28-88b5bb0c0786 00:33:48.014 18:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:48.273 18:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 
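[Note] The free_clusters=61 figure follows directly from the sizes used earlier in the test: the AIO file was grown from 200M to 400M with a 4 MiB cluster size (49 usable data clusters before the grow, 99 after, once lvstore metadata is deducted), and the 150M lvol occupies ceil(150/4) = 38 clusters — matching the num_allocated_clusters value in the bdev dump:

    # cluster math behind the free_clusters check (4 MiB clusters)
    echo $(( (150 + 4 - 1) / 4 ))   # lvol clusters: ceil(150M / 4M) = 38
    echo $(( 99 - 38 ))             # free clusters after allocation = 61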
00:33:48.273 18:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:33:48.273 18:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:48.532 [2024-12-14 18:54:04.147856] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:48.532 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0c27afa-57b1-4637-8b28-88b5bb0c0786 00:33:48.532 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:33:48.532 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0c27afa-57b1-4637-8b28-88b5bb0c0786 00:33:48.532 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:48.532 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:48.532 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:48.532 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:48.532 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:48.532 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:48.532 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:48.532 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:48.532 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0c27afa-57b1-4637-8b28-88b5bb0c0786 00:33:48.791 2024/12/14 18:54:04 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:c0c27afa-57b1-4637-8b28-88b5bb0c0786], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:33:48.791 request: 00:33:48.791 { 00:33:48.791 "method": "bdev_lvol_get_lvstores", 00:33:48.791 "params": { 00:33:48.791 "uuid": "c0c27afa-57b1-4637-8b28-88b5bb0c0786" 00:33:48.791 } 00:33:48.791 } 00:33:48.791 Got JSON-RPC error response 00:33:48.791 GoRPCClient: error on JSON-RPC call 00:33:48.791 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:33:48.791 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 
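[Note] The expected Code=-19 (No such device) error above is the point of the NOT wrapper: deleting aio_bdev takes the lvstore down with it. Because the lvstore metadata still lives in the backing file, re-creating the AIO bdev over the same file lets examine re-load the lvstore and its lvol, which is exactly what the test does next. In RPC form, condensed from the commands logged below:

    # re-create the AIO bdev over the same backing file
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create \
        /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine

    # the lvol reappears under its original UUID once examine completes
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        -b ebe5fa65-2505-4a74-b0f1-5a752756e6f9 -t 2000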
00:33:48.791 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:48.791 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:48.791 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:49.050 aio_bdev 00:33:49.050 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ebe5fa65-2505-4a74-b0f1-5a752756e6f9 00:33:49.050 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=ebe5fa65-2505-4a74-b0f1-5a752756e6f9 00:33:49.050 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:49.050 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:33:49.050 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:49.050 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:49.050 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:49.309 18:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ebe5fa65-2505-4a74-b0f1-5a752756e6f9 -t 2000 00:33:49.566 [ 00:33:49.566 { 00:33:49.567 "aliases": [ 00:33:49.567 "lvs/lvol" 00:33:49.567 ], 00:33:49.567 "assigned_rate_limits": { 00:33:49.567 "r_mbytes_per_sec": 0, 00:33:49.567 "rw_ios_per_sec": 0, 00:33:49.567 "rw_mbytes_per_sec": 0, 00:33:49.567 "w_mbytes_per_sec": 0 00:33:49.567 }, 00:33:49.567 "block_size": 4096, 00:33:49.567 "claimed": false, 00:33:49.567 "driver_specific": { 00:33:49.567 "lvol": { 00:33:49.567 "base_bdev": "aio_bdev", 00:33:49.567 "clone": false, 00:33:49.567 "esnap_clone": false, 00:33:49.567 "lvol_store_uuid": "c0c27afa-57b1-4637-8b28-88b5bb0c0786", 00:33:49.567 "num_allocated_clusters": 38, 00:33:49.567 "snapshot": false, 00:33:49.567 "thin_provision": false 00:33:49.567 } 00:33:49.567 }, 00:33:49.567 "name": "ebe5fa65-2505-4a74-b0f1-5a752756e6f9", 00:33:49.567 "num_blocks": 38912, 00:33:49.567 "product_name": "Logical Volume", 00:33:49.567 "supported_io_types": { 00:33:49.567 "abort": false, 00:33:49.567 "compare": false, 00:33:49.567 "compare_and_write": false, 00:33:49.567 "copy": false, 00:33:49.567 "flush": false, 00:33:49.567 "get_zone_info": false, 00:33:49.567 "nvme_admin": false, 00:33:49.567 "nvme_io": false, 00:33:49.567 "nvme_io_md": false, 00:33:49.567 "nvme_iov_md": false, 00:33:49.567 "read": true, 00:33:49.567 "reset": true, 00:33:49.567 "seek_data": true, 00:33:49.567 "seek_hole": true, 00:33:49.567 "unmap": true, 00:33:49.567 "write": true, 00:33:49.567 "write_zeroes": true, 00:33:49.567 "zcopy": false, 00:33:49.567 "zone_append": false, 00:33:49.567 "zone_management": false 00:33:49.567 }, 00:33:49.567 "uuid": 
"ebe5fa65-2505-4a74-b0f1-5a752756e6f9", 00:33:49.567 "zoned": false 00:33:49.567 } 00:33:49.567 ] 00:33:49.567 18:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:33:49.567 18:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0c27afa-57b1-4637-8b28-88b5bb0c0786 00:33:49.567 18:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:49.824 18:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:49.824 18:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:49.824 18:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0c27afa-57b1-4637-8b28-88b5bb0c0786 00:33:50.082 18:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:50.082 18:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ebe5fa65-2505-4a74-b0f1-5a752756e6f9 00:33:50.341 18:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c0c27afa-57b1-4637-8b28-88b5bb0c0786 00:33:50.599 18:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:50.858 18:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:33:51.425 ************************************ 00:33:51.425 END TEST lvs_grow_clean 00:33:51.425 ************************************ 00:33:51.425 00:33:51.425 real 0m18.371s 00:33:51.425 user 0m17.606s 00:33:51.425 sys 0m2.292s 00:33:51.425 18:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:51.425 18:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:51.425 18:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:33:51.425 18:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:51.425 18:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:51.425 18:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:51.425 ************************************ 00:33:51.425 START TEST lvs_grow_dirty 00:33:51.425 ************************************ 00:33:51.425 18:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:33:51.425 18:54:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:51.425 18:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:51.425 18:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:51.425 18:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:51.425 18:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:51.425 18:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:51.425 18:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:33:51.425 18:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:33:51.425 18:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:51.684 18:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:51.684 18:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:51.943 18:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=55927ce5-4380-4062-8998-f2eca2eb0204 00:33:51.943 18:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55927ce5-4380-4062-8998-f2eca2eb0204 00:33:51.943 18:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:52.201 18:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:52.201 18:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:52.201 18:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55927ce5-4380-4062-8998-f2eca2eb0204 lvol 150 00:33:52.460 18:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=31a3ff50-d8cb-402e-8d25-b9e3d03e3298 00:33:52.460 18:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:33:52.460 18:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:52.718 [2024-12-14 18:54:08.330884] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:52.718 [2024-12-14 18:54:08.331029] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:52.718 true 00:33:52.718 18:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:52.718 18:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55927ce5-4380-4062-8998-f2eca2eb0204 00:33:52.977 18:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:52.977 18:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:53.235 18:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 31a3ff50-d8cb-402e-8d25-b9e3d03e3298 00:33:53.493 18:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:33:53.752 [2024-12-14 18:54:09.487387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:54.010 18:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:33:54.269 18:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:54.269 18:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=123325 00:33:54.269 18:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:54.269 18:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 123325 /var/tmp/bdevperf.sock 00:33:54.269 18:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 123325 ']' 00:33:54.269 18:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:54.269 18:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:54.269 18:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:33:54.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:54.269 18:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:54.269 18:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:54.269 [2024-12-14 18:54:09.822029] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:54.269 [2024-12-14 18:54:09.822104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123325 ] 00:33:54.269 [2024-12-14 18:54:09.953100] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.527 [2024-12-14 18:54:10.041515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:55.094 18:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:55.094 18:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:33:55.094 18:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:55.352 Nvme0n1 00:33:55.352 18:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:55.611 [ 00:33:55.611 { 00:33:55.611 "aliases": [ 00:33:55.611 "31a3ff50-d8cb-402e-8d25-b9e3d03e3298" 00:33:55.611 ], 00:33:55.611 "assigned_rate_limits": { 00:33:55.611 "r_mbytes_per_sec": 0, 00:33:55.611 "rw_ios_per_sec": 0, 00:33:55.611 "rw_mbytes_per_sec": 0, 00:33:55.611 "w_mbytes_per_sec": 0 00:33:55.611 }, 00:33:55.611 "block_size": 4096, 00:33:55.611 "claimed": false, 00:33:55.611 "driver_specific": { 00:33:55.611 "mp_policy": "active_passive", 00:33:55.611 "nvme": [ 00:33:55.611 { 00:33:55.611 "ctrlr_data": { 00:33:55.611 "ana_reporting": false, 00:33:55.611 "cntlid": 1, 00:33:55.611 "firmware_revision": "24.09.1", 00:33:55.611 "model_number": "SPDK bdev Controller", 00:33:55.611 "multi_ctrlr": true, 00:33:55.611 "oacs": { 00:33:55.611 "firmware": 0, 00:33:55.611 "format": 0, 00:33:55.611 "ns_manage": 0, 00:33:55.611 "security": 0 00:33:55.611 }, 00:33:55.611 "serial_number": "SPDK0", 00:33:55.611 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:55.611 "vendor_id": "0x8086" 00:33:55.611 }, 00:33:55.611 "ns_data": { 00:33:55.611 "can_share": true, 00:33:55.611 "id": 1 00:33:55.611 }, 00:33:55.611 "trid": { 00:33:55.611 "adrfam": "IPv4", 00:33:55.611 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:55.611 "traddr": "10.0.0.3", 00:33:55.612 "trsvcid": "4420", 00:33:55.612 "trtype": "TCP" 00:33:55.612 }, 00:33:55.612 "vs": { 00:33:55.612 "nvme_version": "1.3" 00:33:55.612 } 00:33:55.612 } 00:33:55.612 ] 00:33:55.612 }, 00:33:55.612 "memory_domains": [ 00:33:55.612 { 00:33:55.612 "dma_device_id": "system", 00:33:55.612 "dma_device_type": 1 00:33:55.612 } 00:33:55.612 ], 00:33:55.612 "name": "Nvme0n1", 00:33:55.612 "num_blocks": 
38912, 00:33:55.612 "numa_id": -1, 00:33:55.612 "product_name": "NVMe disk", 00:33:55.612 "supported_io_types": { 00:33:55.612 "abort": true, 00:33:55.612 "compare": true, 00:33:55.612 "compare_and_write": true, 00:33:55.612 "copy": true, 00:33:55.612 "flush": true, 00:33:55.612 "get_zone_info": false, 00:33:55.612 "nvme_admin": true, 00:33:55.612 "nvme_io": true, 00:33:55.612 "nvme_io_md": false, 00:33:55.612 "nvme_iov_md": false, 00:33:55.612 "read": true, 00:33:55.612 "reset": true, 00:33:55.612 "seek_data": false, 00:33:55.612 "seek_hole": false, 00:33:55.612 "unmap": true, 00:33:55.612 "write": true, 00:33:55.612 "write_zeroes": true, 00:33:55.612 "zcopy": false, 00:33:55.612 "zone_append": false, 00:33:55.612 "zone_management": false 00:33:55.612 }, 00:33:55.612 "uuid": "31a3ff50-d8cb-402e-8d25-b9e3d03e3298", 00:33:55.612 "zoned": false 00:33:55.612 } 00:33:55.612 ] 00:33:55.612 18:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:55.612 18:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=123367 00:33:55.612 18:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:55.870 Running I/O for 10 seconds... 00:33:56.805 Latency(us) 00:33:56.805 [2024-12-14T18:54:12.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:56.805 Nvme0n1 : 1.00 6590.00 25.74 0.00 0.00 0.00 0.00 0.00 00:33:56.805 [2024-12-14T18:54:12.558Z] =================================================================================================================== 00:33:56.805 [2024-12-14T18:54:12.558Z] Total : 6590.00 25.74 0.00 0.00 0.00 0.00 0.00 00:33:56.805 00:33:57.741 18:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 55927ce5-4380-4062-8998-f2eca2eb0204 00:33:57.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:57.741 Nvme0n1 : 2.00 6574.50 25.68 0.00 0.00 0.00 0.00 0.00 00:33:57.741 [2024-12-14T18:54:13.494Z] =================================================================================================================== 00:33:57.741 [2024-12-14T18:54:13.494Z] Total : 6574.50 25.68 0.00 0.00 0.00 0.00 0.00 00:33:57.741 00:33:58.000 true 00:33:58.000 18:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55927ce5-4380-4062-8998-f2eca2eb0204 00:33:58.000 18:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:58.259 18:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:58.259 18:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:58.259 18:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 123367 00:33:58.826 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 
128, IO size: 4096) 00:33:58.826 Nvme0n1 : 3.00 6604.67 25.80 0.00 0.00 0.00 0.00 0.00 00:33:58.826 [2024-12-14T18:54:14.579Z] =================================================================================================================== 00:33:58.826 [2024-12-14T18:54:14.579Z] Total : 6604.67 25.80 0.00 0.00 0.00 0.00 0.00 00:33:58.826 00:33:59.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:59.762 Nvme0n1 : 4.00 6695.25 26.15 0.00 0.00 0.00 0.00 0.00 00:33:59.762 [2024-12-14T18:54:15.516Z] =================================================================================================================== 00:33:59.763 [2024-12-14T18:54:15.516Z] Total : 6695.25 26.15 0.00 0.00 0.00 0.00 0.00 00:33:59.763 00:34:00.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:00.698 Nvme0n1 : 5.00 6743.40 26.34 0.00 0.00 0.00 0.00 0.00 00:34:00.698 [2024-12-14T18:54:16.451Z] =================================================================================================================== 00:34:00.698 [2024-12-14T18:54:16.451Z] Total : 6743.40 26.34 0.00 0.00 0.00 0.00 0.00 00:34:00.698 00:34:02.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:02.075 Nvme0n1 : 6.00 6774.00 26.46 0.00 0.00 0.00 0.00 0.00 00:34:02.075 [2024-12-14T18:54:17.828Z] =================================================================================================================== 00:34:02.075 [2024-12-14T18:54:17.828Z] Total : 6774.00 26.46 0.00 0.00 0.00 0.00 0.00 00:34:02.075 00:34:02.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:02.663 Nvme0n1 : 7.00 6648.71 25.97 0.00 0.00 0.00 0.00 0.00 00:34:02.663 [2024-12-14T18:54:18.416Z] =================================================================================================================== 00:34:02.663 [2024-12-14T18:54:18.416Z] Total : 6648.71 25.97 0.00 0.00 0.00 0.00 0.00 00:34:02.663 00:34:04.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:04.055 Nvme0n1 : 8.00 6650.62 25.98 0.00 0.00 0.00 0.00 0.00 00:34:04.055 [2024-12-14T18:54:19.808Z] =================================================================================================================== 00:34:04.055 [2024-12-14T18:54:19.808Z] Total : 6650.62 25.98 0.00 0.00 0.00 0.00 0.00 00:34:04.055 00:34:04.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:04.991 Nvme0n1 : 9.00 6652.11 25.98 0.00 0.00 0.00 0.00 0.00 00:34:04.991 [2024-12-14T18:54:20.744Z] =================================================================================================================== 00:34:04.991 [2024-12-14T18:54:20.744Z] Total : 6652.11 25.98 0.00 0.00 0.00 0.00 0.00 00:34:04.991 00:34:05.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:05.928 Nvme0n1 : 10.00 6663.70 26.03 0.00 0.00 0.00 0.00 0.00 00:34:05.928 [2024-12-14T18:54:21.681Z] =================================================================================================================== 00:34:05.928 [2024-12-14T18:54:21.681Z] Total : 6663.70 26.03 0.00 0.00 0.00 0.00 0.00 00:34:05.928 00:34:05.928 00:34:05.928 Latency(us) 00:34:05.928 [2024-12-14T18:54:21.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:05.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:05.928 Nvme0n1 : 10.02 6663.59 26.03 0.00 0.00 19187.34 5749.29 141081.13 
00:34:05.928 [2024-12-14T18:54:21.681Z] =================================================================================================================== 00:34:05.928 [2024-12-14T18:54:21.681Z] Total : 6663.59 26.03 0.00 0.00 19187.34 5749.29 141081.13 00:34:05.928 { 00:34:05.928 "results": [ 00:34:05.928 { 00:34:05.928 "job": "Nvme0n1", 00:34:05.928 "core_mask": "0x2", 00:34:05.928 "workload": "randwrite", 00:34:05.928 "status": "finished", 00:34:05.928 "queue_depth": 128, 00:34:05.928 "io_size": 4096, 00:34:05.928 "runtime": 10.019369, 00:34:05.928 "iops": 6663.5932861640285, 00:34:05.928 "mibps": 26.029661274078236, 00:34:05.928 "io_failed": 0, 00:34:05.928 "io_timeout": 0, 00:34:05.928 "avg_latency_us": 19187.339287786876, 00:34:05.928 "min_latency_us": 5749.294545454545, 00:34:05.928 "max_latency_us": 141081.13454545455 00:34:05.928 } 00:34:05.928 ], 00:34:05.928 "core_count": 1 00:34:05.928 } 00:34:05.928 18:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 123325 00:34:05.928 18:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 123325 ']' 00:34:05.928 18:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 123325 00:34:05.928 18:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:34:05.928 18:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:05.928 18:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123325 00:34:05.928 18:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:05.928 18:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:05.928 killing process with pid 123325 00:34:05.928 18:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123325' 00:34:05.928 18:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 123325 00:34:05.928 Received shutdown signal, test time was about 10.000000 seconds 00:34:05.928 00:34:05.928 Latency(us) 00:34:05.928 [2024-12-14T18:54:21.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:05.928 [2024-12-14T18:54:21.681Z] =================================================================================================================== 00:34:05.928 [2024-12-14T18:54:21.681Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:05.928 18:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 123325 00:34:06.187 18:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:34:06.445 18:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:06.704 18:54:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55927ce5-4380-4062-8998-f2eca2eb0204 00:34:06.704 18:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:06.963 18:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:06.963 18:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:34:06.963 18:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 122741 00:34:06.963 18:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 122741 00:34:06.963 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 122741 Killed "${NVMF_APP[@]}" "$@" 00:34:06.963 18:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:34:06.963 18:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:34:06.963 18:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:06.963 18:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:06.963 18:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:06.963 18:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:06.963 18:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=123517 00:34:06.963 18:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 123517 00:34:06.963 18:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 123517 ']' 00:34:06.963 18:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:06.963 18:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:06.963 18:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:06.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:06.963 18:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:06.963 18:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:06.963 [2024-12-14 18:54:22.557515] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
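The block above is the heart of the lvs_grow_dirty case: truncate grows the backing AIO file from 200M to 400M, bdev_aio_rescan picks up the new size (51200 -> 102400 blocks), bdev_lvol_grow_lvstore extends the lvstore from 49 to 99 data clusters, and the target is then killed with SIGKILL so the grown metadata is left dirty on disk. A minimal sketch of that RPC sequence, assuming a running nvmf_tgt and reusing the rpc.py path, lvstore UUID, and sizes from this run:

    # Dirty-grow sketch; paths, UUID, and sizes are the ones seen in this log.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    LVS=55927ce5-4380-4062-8998-f2eca2eb0204

    truncate -s 400M "$AIO"                  # grow the backing file 200M -> 400M
    "$RPC" bdev_aio_rescan aio_bdev          # target re-reads size: 51200 -> 102400 blocks
    "$RPC" bdev_lvol_grow_lvstore -u "$LVS"  # extend the lvstore onto the new space
    "$RPC" bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'  # expect 99
    kill -9 "$nvmfpid"                       # dirty shutdown: grown metadata never flushed

On the next bdev_aio_create over the same file, the blobstore detects the unclean state and replays recovery, which is exactly what the "Performing recovery on blobstore" notices below show.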
00:34:06.963 [2024-12-14 18:54:22.558791] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:06.963 [2024-12-14 18:54:22.558867] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:06.963 [2024-12-14 18:54:22.700418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:07.223 [2024-12-14 18:54:22.780793] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:07.223 [2024-12-14 18:54:22.780860] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:07.223 [2024-12-14 18:54:22.780874] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:07.223 [2024-12-14 18:54:22.780884] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:07.223 [2024-12-14 18:54:22.780894] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:07.223 [2024-12-14 18:54:22.780936] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:07.223 [2024-12-14 18:54:22.873021] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:07.223 [2024-12-14 18:54:22.873402] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:08.159 18:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:08.159 18:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:34:08.159 18:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:08.159 18:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:08.159 18:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:08.159 18:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:08.159 18:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:08.159 [2024-12-14 18:54:23.831137] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:34:08.159 [2024-12-14 18:54:23.832347] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:34:08.159 [2024-12-14 18:54:23.832852] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:34:08.159 18:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:34:08.159 18:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 31a3ff50-d8cb-402e-8d25-b9e3d03e3298 00:34:08.159 18:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@899 -- # local bdev_name=31a3ff50-d8cb-402e-8d25-b9e3d03e3298 00:34:08.159 18:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:08.159 18:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:34:08.159 18:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:08.159 18:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:08.159 18:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:08.418 18:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 31a3ff50-d8cb-402e-8d25-b9e3d03e3298 -t 2000 00:34:08.676 [ 00:34:08.676 { 00:34:08.676 "aliases": [ 00:34:08.676 "lvs/lvol" 00:34:08.676 ], 00:34:08.676 "assigned_rate_limits": { 00:34:08.676 "r_mbytes_per_sec": 0, 00:34:08.676 "rw_ios_per_sec": 0, 00:34:08.676 "rw_mbytes_per_sec": 0, 00:34:08.676 "w_mbytes_per_sec": 0 00:34:08.676 }, 00:34:08.676 "block_size": 4096, 00:34:08.676 "claimed": false, 00:34:08.676 "driver_specific": { 00:34:08.676 "lvol": { 00:34:08.676 "base_bdev": "aio_bdev", 00:34:08.676 "clone": false, 00:34:08.676 "esnap_clone": false, 00:34:08.676 "lvol_store_uuid": "55927ce5-4380-4062-8998-f2eca2eb0204", 00:34:08.676 "num_allocated_clusters": 38, 00:34:08.676 "snapshot": false, 00:34:08.676 "thin_provision": false 00:34:08.676 } 00:34:08.676 }, 00:34:08.676 "name": "31a3ff50-d8cb-402e-8d25-b9e3d03e3298", 00:34:08.676 "num_blocks": 38912, 00:34:08.676 "product_name": "Logical Volume", 00:34:08.676 "supported_io_types": { 00:34:08.676 "abort": false, 00:34:08.676 "compare": false, 00:34:08.676 "compare_and_write": false, 00:34:08.676 "copy": false, 00:34:08.676 "flush": false, 00:34:08.676 "get_zone_info": false, 00:34:08.676 "nvme_admin": false, 00:34:08.676 "nvme_io": false, 00:34:08.676 "nvme_io_md": false, 00:34:08.676 "nvme_iov_md": false, 00:34:08.676 "read": true, 00:34:08.676 "reset": true, 00:34:08.676 "seek_data": true, 00:34:08.676 "seek_hole": true, 00:34:08.676 "unmap": true, 00:34:08.676 "write": true, 00:34:08.676 "write_zeroes": true, 00:34:08.676 "zcopy": false, 00:34:08.676 "zone_append": false, 00:34:08.676 "zone_management": false 00:34:08.676 }, 00:34:08.676 "uuid": "31a3ff50-d8cb-402e-8d25-b9e3d03e3298", 00:34:08.676 "zoned": false 00:34:08.676 } 00:34:08.676 ] 00:34:08.676 18:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:34:08.676 18:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55927ce5-4380-4062-8998-f2eca2eb0204 00:34:08.676 18:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:34:08.935 18:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:34:08.935 18:54:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55927ce5-4380-4062-8998-f2eca2eb0204 00:34:08.935 18:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:34:09.193 18:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:34:09.193 18:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:09.451 [2024-12-14 18:54:25.137781] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:09.451 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55927ce5-4380-4062-8998-f2eca2eb0204 00:34:09.452 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:34:09.452 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55927ce5-4380-4062-8998-f2eca2eb0204 00:34:09.452 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:09.452 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:09.452 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:09.452 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:09.452 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:09.452 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:09.452 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:09.452 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:09.452 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55927ce5-4380-4062-8998-f2eca2eb0204 00:34:10.017 2024/12/14 18:54:25 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:55927ce5-4380-4062-8998-f2eca2eb0204], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:34:10.017 request: 00:34:10.017 { 00:34:10.017 "method": "bdev_lvol_get_lvstores", 00:34:10.017 "params": { 00:34:10.017 "uuid": "55927ce5-4380-4062-8998-f2eca2eb0204" 00:34:10.017 } 00:34:10.017 } 
00:34:10.017 Got JSON-RPC error response 00:34:10.017 GoRPCClient: error on JSON-RPC call 00:34:10.017 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:34:10.017 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:10.017 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:10.017 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:10.017 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:10.276 aio_bdev 00:34:10.276 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 31a3ff50-d8cb-402e-8d25-b9e3d03e3298 00:34:10.276 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=31a3ff50-d8cb-402e-8d25-b9e3d03e3298 00:34:10.276 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:10.276 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:34:10.276 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:10.276 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:10.276 18:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:10.276 18:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 31a3ff50-d8cb-402e-8d25-b9e3d03e3298 -t 2000 00:34:10.535 [ 00:34:10.535 { 00:34:10.535 "aliases": [ 00:34:10.535 "lvs/lvol" 00:34:10.535 ], 00:34:10.535 "assigned_rate_limits": { 00:34:10.535 "r_mbytes_per_sec": 0, 00:34:10.535 "rw_ios_per_sec": 0, 00:34:10.535 "rw_mbytes_per_sec": 0, 00:34:10.535 "w_mbytes_per_sec": 0 00:34:10.535 }, 00:34:10.535 "block_size": 4096, 00:34:10.535 "claimed": false, 00:34:10.535 "driver_specific": { 00:34:10.535 "lvol": { 00:34:10.535 "base_bdev": "aio_bdev", 00:34:10.535 "clone": false, 00:34:10.535 "esnap_clone": false, 00:34:10.535 "lvol_store_uuid": "55927ce5-4380-4062-8998-f2eca2eb0204", 00:34:10.535 "num_allocated_clusters": 38, 00:34:10.535 "snapshot": false, 00:34:10.535 "thin_provision": false 00:34:10.535 } 00:34:10.535 }, 00:34:10.535 "name": "31a3ff50-d8cb-402e-8d25-b9e3d03e3298", 00:34:10.535 "num_blocks": 38912, 00:34:10.535 "product_name": "Logical Volume", 00:34:10.535 "supported_io_types": { 00:34:10.535 "abort": false, 00:34:10.535 "compare": false, 00:34:10.535 "compare_and_write": false, 00:34:10.535 "copy": false, 00:34:10.535 "flush": false, 00:34:10.535 "get_zone_info": false, 00:34:10.535 "nvme_admin": false, 00:34:10.535 "nvme_io": false, 00:34:10.535 "nvme_io_md": false, 00:34:10.535 "nvme_iov_md": false, 
00:34:10.535 "read": true, 00:34:10.535 "reset": true, 00:34:10.535 "seek_data": true, 00:34:10.535 "seek_hole": true, 00:34:10.535 "unmap": true, 00:34:10.535 "write": true, 00:34:10.535 "write_zeroes": true, 00:34:10.535 "zcopy": false, 00:34:10.535 "zone_append": false, 00:34:10.535 "zone_management": false 00:34:10.535 }, 00:34:10.535 "uuid": "31a3ff50-d8cb-402e-8d25-b9e3d03e3298", 00:34:10.535 "zoned": false 00:34:10.535 } 00:34:10.535 ] 00:34:10.535 18:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:34:10.535 18:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55927ce5-4380-4062-8998-f2eca2eb0204 00:34:10.535 18:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:10.794 18:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:10.794 18:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55927ce5-4380-4062-8998-f2eca2eb0204 00:34:10.794 18:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:11.053 18:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:11.053 18:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 31a3ff50-d8cb-402e-8d25-b9e3d03e3298 00:34:11.312 18:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 55927ce5-4380-4062-8998-f2eca2eb0204 00:34:11.571 18:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:11.830 18:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:34:12.088 00:34:12.088 real 0m20.835s 00:34:12.088 user 0m27.326s 00:34:12.088 sys 0m9.389s 00:34:12.088 18:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:12.088 18:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:12.088 ************************************ 00:34:12.088 END TEST lvs_grow_dirty 00:34:12.088 ************************************ 00:34:12.088 18:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:34:12.088 18:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:34:12.089 18:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:34:12.089 18:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 
00:34:12.089 18:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:34:12.089 18:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:34:12.089 18:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:34:12.089 18:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:34:12.089 18:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:34:12.089 nvmf_trace.0 00:34:12.347 18:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:34:12.347 18:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:34:12.347 18:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:12.347 18:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:13.725 rmmod nvme_tcp 00:34:13.725 rmmod nvme_fabrics 00:34:13.725 rmmod nvme_keyring 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 123517 ']' 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 123517 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 123517 ']' 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 123517 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123517 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:13.725 killing process with pid 123517 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 123517' 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 123517 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 123517 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:13.725 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:13.984 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:13.984 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:13.984 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:13.984 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:13.984 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:13.984 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.984 18:54:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:13.984 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.984 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:34:13.984 ************************************ 00:34:13.984 END TEST nvmf_lvs_grow 00:34:13.984 ************************************ 00:34:13.984 00:34:13.984 real 0m42.537s 00:34:13.984 user 0m46.249s 00:34:13.984 sys 0m13.587s 00:34:13.984 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:13.984 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:13.984 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:13.984 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:13.984 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:13.984 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:13.984 ************************************ 00:34:13.984 START TEST nvmf_bdev_io_wait 00:34:13.984 ************************************ 00:34:13.984 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:14.244 * Looking for test storage... 
00:34:14.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:14.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.244 --rc genhtml_branch_coverage=1 00:34:14.244 --rc genhtml_function_coverage=1 00:34:14.244 --rc genhtml_legend=1 00:34:14.244 --rc geninfo_all_blocks=1 00:34:14.244 --rc geninfo_unexecuted_blocks=1 00:34:14.244 00:34:14.244 ' 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:14.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.244 --rc genhtml_branch_coverage=1 00:34:14.244 --rc genhtml_function_coverage=1 00:34:14.244 --rc genhtml_legend=1 00:34:14.244 --rc geninfo_all_blocks=1 00:34:14.244 --rc geninfo_unexecuted_blocks=1 00:34:14.244 00:34:14.244 ' 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:14.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.244 --rc genhtml_branch_coverage=1 00:34:14.244 --rc genhtml_function_coverage=1 00:34:14.244 --rc genhtml_legend=1 00:34:14.244 --rc geninfo_all_blocks=1 00:34:14.244 --rc geninfo_unexecuted_blocks=1 00:34:14.244 00:34:14.244 ' 00:34:14.244 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:14.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.244 --rc genhtml_branch_coverage=1 00:34:14.244 --rc genhtml_function_coverage=1 00:34:14.244 --rc genhtml_legend=1 00:34:14.244 --rc geninfo_all_blocks=1 00:34:14.244 --rc 
geninfo_unexecuted_blocks=1 00:34:14.244 00:34:14.244 ' 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # nvmf_veth_init 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:14.245 Cannot find device "nvmf_init_br" 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:14.245 Cannot find device "nvmf_init_br2" 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:14.245 Cannot find device "nvmf_tgt_br" 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:14.245 Cannot find device "nvmf_tgt_br2" 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:14.245 Cannot find device "nvmf_init_br" 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:34:14.245 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:14.245 Cannot find device "nvmf_init_br2" 00:34:14.246 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:34:14.246 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 00:34:14.246 Cannot find device "nvmf_tgt_br" 00:34:14.246 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:34:14.246 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:14.246 Cannot find device "nvmf_tgt_br2" 00:34:14.246 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:34:14.246 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:14.505 Cannot find device "nvmf_br" 00:34:14.505 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:34:14.505 18:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:14.505 Cannot find device "nvmf_init_if" 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:14.505 Cannot find device "nvmf_init_if2" 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:14.505 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:14.505 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:14.505 18:54:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:14.505 
18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:14.505 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:14.505 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:34:14.505 00:34:14.505 --- 10.0.0.3 ping statistics --- 00:34:14.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.505 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:14.505 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:14.505 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:34:14.505 00:34:14.505 --- 10.0.0.4 ping statistics --- 00:34:14.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.505 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:14.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:14.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:34:14.505 00:34:14.505 --- 10.0.0.1 ping statistics --- 00:34:14.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.505 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:34:14.505 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:14.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:14.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:34:14.765 00:34:14.765 --- 10.0.0.2 ping statistics --- 00:34:14.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.765 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # return 0 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=124002 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 124002 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 124002 ']' 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:14.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
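With the veth/bridge fabric verified by the four pings above, nvmfappstart launches the target. Reduced to a standalone sketch (binary path, flags, and socket path exactly as logged; the polling loop is a simplified stand-in for the suite's waitforlisten helper):

    # Start the NVMe-oF target inside the test namespace in interrupt mode;
    # --wait-for-rpc holds off subsystem init so bdev options can be tuned first.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # Block until the JSON-RPC socket answers before sending configuration.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done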
00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:14.765 18:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:14.765 [2024-12-14 18:54:30.355234] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:14.765 [2024-12-14 18:54:30.356541] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:14.765 [2024-12-14 18:54:30.356627] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:14.765 [2024-12-14 18:54:30.500834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:15.024 [2024-12-14 18:54:30.578560] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:15.024 [2024-12-14 18:54:30.578636] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:15.024 [2024-12-14 18:54:30.578651] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:15.024 [2024-12-14 18:54:30.578662] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:15.024 [2024-12-14 18:54:30.578671] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:15.024 [2024-12-14 18:54:30.579672] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:15.024 [2024-12-14 18:54:30.579920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:15.024 [2024-12-14 18:54:30.579827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:34:15.024 [2024-12-14 18:54:30.579913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:34:15.024 [2024-12-14 18:54:30.580469] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
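The thread.c and reactor.c notices above are the signature of this interrupt-mode variant of the suite: with --interrupt-mode each reactor parks in an fd-based event wait instead of busy-polling, and every spdk_thread is switched to interrupt mode as it is created. One way to confirm the state after startup is the framework_get_reactors RPC (a sketch; the exact output fields vary between SPDK releases):

    # All four reactors (cores 0-3, mask 0xF) should report in_interrupt: true.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_reactors \
        | jq '.reactors[] | {lcore, in_interrupt}'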
00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:15.962 [2024-12-14 18:54:31.516881] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:15.962 [2024-12-14 18:54:31.517655] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:15.962 [2024-12-14 18:54:31.519246] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:15.962 [2024-12-14 18:54:31.519277] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
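The bdev_set_options call traced above is the crux of this test: a global pool of only 5 bdev_io structures with a per-thread cache of 1 guarantees that the queue-depth-128 workloads launched below exhaust the pool, forcing submissions through the spdk_bdev_queue_io_wait retry path the test is named after. It is also why the target was started with --wait-for-rpc: pool sizing must be applied before framework initialization. Since rpc_cmd is the suite's wrapper around scripts/rpc.py, the pair is equivalent to:

    # Shrink the bdev_io pool (-p pool size, -c per-thread cache), then
    # let subsystem initialization proceed.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -p 5 -c 1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init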
00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:15.962 [2024-12-14 18:54:31.528813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:15.962 Malloc0 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:15.962 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:15.963 [2024-12-14 18:54:31.609015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=124055 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=124057 00:34:15.963 18:54:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:34:15.963 { 00:34:15.963 "params": { 00:34:15.963 "name": "Nvme$subsystem", 00:34:15.963 "trtype": "$TEST_TRANSPORT", 00:34:15.963 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.963 "adrfam": "ipv4", 00:34:15.963 "trsvcid": "$NVMF_PORT", 00:34:15.963 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.963 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.963 "hdgst": ${hdgst:-false}, 00:34:15.963 "ddgst": ${ddgst:-false} 00:34:15.963 }, 00:34:15.963 "method": "bdev_nvme_attach_controller" 00:34:15.963 } 00:34:15.963 EOF 00:34:15.963 )") 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:34:15.963 { 00:34:15.963 "params": { 00:34:15.963 "name": "Nvme$subsystem", 00:34:15.963 "trtype": "$TEST_TRANSPORT", 00:34:15.963 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.963 "adrfam": "ipv4", 00:34:15.963 "trsvcid": "$NVMF_PORT", 00:34:15.963 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.963 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.963 "hdgst": ${hdgst:-false}, 00:34:15.963 "ddgst": ${ddgst:-false} 00:34:15.963 }, 00:34:15.963 "method": "bdev_nvme_attach_controller" 00:34:15.963 } 00:34:15.963 EOF 00:34:15.963 )") 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=124059 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=124063 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@578 -- # cat 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:34:15.963 { 00:34:15.963 "params": { 00:34:15.963 "name": "Nvme$subsystem", 00:34:15.963 "trtype": "$TEST_TRANSPORT", 00:34:15.963 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.963 "adrfam": "ipv4", 00:34:15.963 "trsvcid": "$NVMF_PORT", 00:34:15.963 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.963 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.963 "hdgst": ${hdgst:-false}, 00:34:15.963 "ddgst": ${ddgst:-false} 00:34:15.963 }, 00:34:15.963 "method": "bdev_nvme_attach_controller" 00:34:15.963 } 00:34:15.963 EOF 00:34:15.963 )") 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
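Collecting the target-side rpc_cmd calls traced above, the subsystem the four perf jobs are about to exercise was built with the equivalent of the following direct rpc.py calls (all arguments as logged; per rpc.py's flag names, -u sets the transport I/O unit size and -o disables the TCP C2H success optimization):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420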
00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:34:15.963 "params": { 00:34:15.963 "name": "Nvme1", 00:34:15.963 "trtype": "tcp", 00:34:15.963 "traddr": "10.0.0.3", 00:34:15.963 "adrfam": "ipv4", 00:34:15.963 "trsvcid": "4420", 00:34:15.963 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:15.963 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:15.963 "hdgst": false, 00:34:15.963 "ddgst": false 00:34:15.963 }, 00:34:15.963 "method": "bdev_nvme_attach_controller" 00:34:15.963 }' 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:34:15.963 { 00:34:15.963 "params": { 00:34:15.963 "name": "Nvme$subsystem", 00:34:15.963 "trtype": "$TEST_TRANSPORT", 00:34:15.963 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.963 "adrfam": "ipv4", 00:34:15.963 "trsvcid": "$NVMF_PORT", 00:34:15.963 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.963 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.963 "hdgst": ${hdgst:-false}, 00:34:15.963 "ddgst": ${ddgst:-false} 00:34:15.963 }, 00:34:15.963 "method": "bdev_nvme_attach_controller" 00:34:15.963 } 00:34:15.963 EOF 00:34:15.963 )") 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:34:15.963 "params": { 00:34:15.963 "name": "Nvme1", 00:34:15.963 "trtype": "tcp", 00:34:15.963 "traddr": "10.0.0.3", 00:34:15.963 "adrfam": "ipv4", 00:34:15.963 "trsvcid": "4420", 00:34:15.963 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:15.963 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:15.963 "hdgst": false, 00:34:15.963 "ddgst": false 00:34:15.963 }, 00:34:15.963 "method": "bdev_nvme_attach_controller" 00:34:15.963 }' 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
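Each bdevperf instance receives that generated configuration on an anonymous fd (--json /dev/fd/63), so all four jobs attach the same Nvme1 controller over 10.0.0.3:4420 and drive bdev Nvme1n1 with a different workload. A single job can be reproduced standalone as follows (write case shown; gen_nvmf_target_json is the suite helper whose expansion is printed above):

    # -m 0x10 pins the job to core 4; -q 128 -o 4096 -w write -t 1 -s 256 as logged.
    # Process substitution recreates the --json /dev/fd/63 argument.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)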
00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:34:15.963 "params": { 00:34:15.963 "name": "Nvme1", 00:34:15.963 "trtype": "tcp", 00:34:15.963 "traddr": "10.0.0.3", 00:34:15.963 "adrfam": "ipv4", 00:34:15.963 "trsvcid": "4420", 00:34:15.963 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:15.963 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:15.963 "hdgst": false, 00:34:15.963 "ddgst": false 00:34:15.963 }, 00:34:15.963 "method": "bdev_nvme_attach_controller" 00:34:15.963 }' 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:34:15.963 "params": { 00:34:15.963 "name": "Nvme1", 00:34:15.963 "trtype": "tcp", 00:34:15.963 "traddr": "10.0.0.3", 00:34:15.963 "adrfam": "ipv4", 00:34:15.963 "trsvcid": "4420", 00:34:15.963 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:15.963 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:15.963 "hdgst": false, 00:34:15.963 "ddgst": false 00:34:15.963 }, 00:34:15.963 "method": "bdev_nvme_attach_controller" 00:34:15.963 }' 00:34:15.963 [2024-12-14 18:54:31.672476] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:15.963 [2024-12-14 18:54:31.672553] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:34:15.963 [2024-12-14 18:54:31.679132] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:15.963 [2024-12-14 18:54:31.679229] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:15.963 18:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 124055 00:34:16.223 [2024-12-14 18:54:31.729175] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:16.223 [2024-12-14 18:54:31.729344] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:34:16.223 [2024-12-14 18:54:31.741485] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:34:16.223 [2024-12-14 18:54:31.741650] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:34:16.223 [2024-12-14 18:54:31.880556] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:16.223 [2024-12-14 18:54:31.956225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:16.223 [2024-12-14 18:54:31.970237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5
00:34:16.481 [2024-12-14 18:54:32.028651] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:34:16.481 [2024-12-14 18:54:32.046625] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:16.482 [2024-12-14 18:54:32.124826] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7
00:34:16.482 [2024-12-14 18:54:32.126245] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:16.482 Running I/O for 1 seconds...
00:34:16.482 [2024-12-14 18:54:32.203150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6
00:34:16.482 Running I/O for 1 seconds...
00:34:16.740 Running I/O for 1 seconds...
00:34:16.740 Running I/O for 1 seconds...
00:34:17.677 6749.00 IOPS, 26.36 MiB/s
00:34:17.677 Latency(us)
[2024-12-14T18:54:33.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:17.677 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:34:17.677 Nvme1n1 : 1.01 6795.07 26.54 0.00 0.00 18717.82 4200.26 32172.22
00:34:17.677 [2024-12-14T18:54:33.430Z] ===================================================================================================================
00:34:17.677 [2024-12-14T18:54:33.430Z] Total : 6795.07 26.54 0.00 0.00 18717.82 4200.26 32172.22
00:34:17.677 3744.00 IOPS, 14.62 MiB/s
00:34:17.677 Latency(us)
[2024-12-14T18:54:33.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:17.677 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:34:17.677 Nvme1n1 : 1.03 3763.59 14.70 0.00 0.00 33444.89 5808.87 54335.30
00:34:17.677 [2024-12-14T18:54:33.430Z] ===================================================================================================================
00:34:17.677 [2024-12-14T18:54:33.430Z] Total : 3763.59 14.70 0.00 0.00 33444.89 5808.87 54335.30
00:34:17.677 3771.00 IOPS, 14.73 MiB/s
00:34:17.677 Latency(us)
[2024-12-14T18:54:33.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:17.677 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:34:17.677 Nvme1n1 : 1.01 3894.94 15.21 0.00 0.00 32724.08 6642.97 61484.68
00:34:17.677 [2024-12-14T18:54:33.430Z] ===================================================================================================================
00:34:17.677 [2024-12-14T18:54:33.430Z] Total : 3894.94 15.21 0.00 0.00 32724.08 6642.97 61484.68
00:34:17.677 209608.00 IOPS, 818.78 MiB/s
00:34:17.677 Latency(us)
[2024-12-14T18:54:33.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:17.677 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:34:17.677 Nvme1n1 : 1.00 209268.82 817.46 0.00 0.00 608.47 273.69 1727.77
00:34:17.677 [2024-12-14T18:54:33.430Z] ===================================================================================================================
00:34:17.677 [2024-12-14T18:54:33.430Z] Total : 209268.82 817.46 0.00 0.00 608.47 273.69 1727.77
00:34:17.936 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 124057
00:34:17.936 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 124059
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 124063
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:18.194 rmmod nvme_tcp
00:34:18.194 rmmod nvme_fabrics
00:34:18.194 rmmod nvme_keyring
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 124002 ']'
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 124002
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 124002 ']'
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 124002
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
124002 00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:18.194 killing process with pid 124002 00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124002' 00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 124002 00:34:18.194 18:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 124002 00:34:18.453 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:18.453 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:18.453 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:18.453 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:34:18.453 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:34:18.453 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:18.453 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:34:18.453 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:18.453 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:18.453 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:18.453 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:18.453 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:18.453 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:18.453 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:18.453 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:18.453 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:18.453 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:18.453 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:18.712 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:18.712 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:18.712 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # 
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:18.712 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:18.712 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:18.712 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.712 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:18.712 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.712 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:34:18.712 00:34:18.712 real 0m4.689s 00:34:18.712 user 0m14.588s 00:34:18.712 sys 0m2.571s 00:34:18.712 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:18.712 ************************************ 00:34:18.712 END TEST nvmf_bdev_io_wait 00:34:18.712 ************************************ 00:34:18.712 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:18.712 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:18.712 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:18.712 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:18.712 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:18.712 ************************************ 00:34:18.712 START TEST nvmf_queue_depth 00:34:18.712 ************************************ 00:34:18.712 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:18.972 * Looking for test storage... 
00:34:18.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:18.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.972 --rc genhtml_branch_coverage=1 00:34:18.972 --rc genhtml_function_coverage=1 00:34:18.972 --rc genhtml_legend=1 00:34:18.972 --rc geninfo_all_blocks=1 00:34:18.972 --rc geninfo_unexecuted_blocks=1 00:34:18.972 00:34:18.972 ' 00:34:18.972 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:18.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.973 --rc genhtml_branch_coverage=1 00:34:18.973 --rc genhtml_function_coverage=1 00:34:18.973 --rc genhtml_legend=1 00:34:18.973 --rc geninfo_all_blocks=1 00:34:18.973 --rc geninfo_unexecuted_blocks=1 00:34:18.973 00:34:18.973 ' 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:18.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.973 --rc genhtml_branch_coverage=1 00:34:18.973 --rc genhtml_function_coverage=1 00:34:18.973 --rc genhtml_legend=1 00:34:18.973 --rc geninfo_all_blocks=1 00:34:18.973 --rc geninfo_unexecuted_blocks=1 00:34:18.973 00:34:18.973 ' 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:18.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.973 --rc genhtml_branch_coverage=1 00:34:18.973 --rc genhtml_function_coverage=1 00:34:18.973 --rc genhtml_legend=1 00:34:18.973 --rc geninfo_all_blocks=1 00:34:18.973 --rc 
geninfo_unexecuted_blocks=1 00:34:18.973 00:34:18.973 ' 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@456 -- # nvmf_veth_init 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:18.973 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:18.974 Cannot find device "nvmf_init_br" 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:18.974 Cannot find device "nvmf_init_br2" 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:18.974 Cannot find device "nvmf_tgt_br" 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:18.974 Cannot find device "nvmf_tgt_br2" 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:18.974 Cannot find device "nvmf_init_br" 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:18.974 Cannot find device "nvmf_init_br2" 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:34:18.974 
18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:18.974 Cannot find device "nvmf_tgt_br" 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:18.974 Cannot find device "nvmf_tgt_br2" 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:18.974 Cannot find device "nvmf_br" 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:18.974 Cannot find device "nvmf_init_if" 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:34:18.974 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:19.233 Cannot find device "nvmf_init_if2" 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:19.233 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:19.233 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:19.233 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:19.234 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:19.234 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:34:19.234 00:34:19.234 --- 10.0.0.3 ping statistics --- 00:34:19.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.234 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:19.234 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:19.234 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.114 ms 00:34:19.234 00:34:19.234 --- 10.0.0.4 ping statistics --- 00:34:19.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.234 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:19.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:19.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:34:19.234 00:34:19.234 --- 10.0.0.1 ping statistics --- 00:34:19.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.234 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:19.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:19.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:34:19.234 00:34:19.234 --- 10.0.0.2 ping statistics --- 00:34:19.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.234 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@457 -- # return 0 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=124347 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 124347 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 124347 ']' 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:19.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
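Note: at this point the fixture is fully verified — four veth pairs (nvmf_init_if/nvmf_init_if2 on the host at 10.0.0.1/10.0.0.2, nvmf_tgt_if/nvmf_tgt_if2 inside nvmf_tgt_ns_spdk at 10.0.0.3/10.0.0.4), all slaved to the nvmf_br bridge, iptables ACCEPT rules for TCP port 4420, and one successful ping in each direction with sub-millisecond RTT. nvmfappstart then launches the target inside the namespace; a sketch of the equivalent manual invocation (path and flags exactly as logged, PID bookkeeping simplified):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &   # shm id 0, tracepoint mask 0xFFFF, core mask 0x2
    nvmfpid=$!
    # waitforlisten then polls until the app answers on /var/tmp/spdk.sock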
00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:19.234 18:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:19.493 [2024-12-14 18:54:35.047063] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:19.493 [2024-12-14 18:54:35.048313] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:19.493 [2024-12-14 18:54:35.048385] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:19.493 [2024-12-14 18:54:35.195193] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:19.752 [2024-12-14 18:54:35.282087] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:19.752 [2024-12-14 18:54:35.282177] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:19.752 [2024-12-14 18:54:35.282192] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:19.752 [2024-12-14 18:54:35.282203] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:19.752 [2024-12-14 18:54:35.282213] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:19.752 [2024-12-14 18:54:35.282257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:19.752 [2024-12-14 18:54:35.415681] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:19.752 [2024-12-14 18:54:35.416100] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
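Note: the notices above show interrupt mode taking effect — a single reactor on core 1 (mask 0x2), with app_thread and nvmf_tgt_poll_group_000 switched to intr mode, so the reactor waits on file descriptors instead of busy-polling (that reading of --interrupt-mode is the editor's, not stated in the log). Once waitforlisten returns, the target answers on its UNIX-domain RPC socket; a hypothetical spot-check from the host (no netns entry needed for a UNIX socket):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods | head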
00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:20.688 [2024-12-14 18:54:36.155130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:20.688 Malloc0 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
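Note: queue_depth.sh provisions the target through rpc_cmd (a harness wrapper, presumably over scripts/rpc.py). Collected as plain RPC calls with exactly the arguments from the log — the listener call completes in the next entry; the trailing comments are the editor's reading of the flags:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192    # "-t tcp -o" from NVMF_TRANSPORT_OPTS; -u 8192 assumed to be in-capsule data size
    $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM bdev, 512 B blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose Malloc0 as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420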
00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:20.688 [2024-12-14 18:54:36.239279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=124397 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 124397 /var/tmp/bdevperf.sock 00:34:20.688 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 124397 ']' 00:34:20.689 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:20.689 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:20.689 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:20.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:20.689 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:20.689 18:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:20.689 [2024-12-14 18:54:36.301400] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
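Note: the listener is up on 10.0.0.3:4420 and bdevperf (pid 124397) starts with -z, i.e. it idles until driven over its own RPC socket, carrying the workload that names the test: queue depth 1024, 4096-byte I/Os, verify pattern, 10 seconds. The wiring that follows in the log, written out as standalone commands (arguments verbatim; socket path from bdevperf_rpc_sock):

    sock=/var/tmp/bdevperf.sock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1      # yields bdev NVMe0n1 over NVMe/TCP
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests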
00:34:20.689 [2024-12-14 18:54:36.301509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124397 ] 00:34:20.689 [2024-12-14 18:54:36.438876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:20.948 [2024-12-14 18:54:36.529111] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.884 18:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:21.884 18:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:34:21.884 18:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:21.884 18:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.884 18:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:21.884 NVMe0n1 00:34:21.884 18:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.884 18:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:21.884 Running I/O for 10 seconds... 00:34:23.755 9893.00 IOPS, 38.64 MiB/s [2024-12-14T18:54:40.885Z] 10241.50 IOPS, 40.01 MiB/s [2024-12-14T18:54:41.452Z] 10263.67 IOPS, 40.09 MiB/s [2024-12-14T18:54:42.833Z] 10489.25 IOPS, 40.97 MiB/s [2024-12-14T18:54:43.814Z] 10539.40 IOPS, 41.17 MiB/s [2024-12-14T18:54:44.749Z] 10598.17 IOPS, 41.40 MiB/s [2024-12-14T18:54:45.684Z] 10694.00 IOPS, 41.77 MiB/s [2024-12-14T18:54:46.618Z] 10765.75 IOPS, 42.05 MiB/s [2024-12-14T18:54:47.554Z] 10829.56 IOPS, 42.30 MiB/s [2024-12-14T18:54:47.554Z] 10896.40 IOPS, 42.56 MiB/s 00:34:31.801 Latency(us) 00:34:31.801 [2024-12-14T18:54:47.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:31.801 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:34:31.801 Verification LBA range: start 0x0 length 0x4000 00:34:31.801 NVMe0n1 : 10.06 10931.80 42.70 0.00 0.00 93321.48 15073.28 62437.93 00:34:31.801 [2024-12-14T18:54:47.554Z] =================================================================================================================== 00:34:31.801 [2024-12-14T18:54:47.554Z] Total : 10931.80 42.70 0.00 0.00 93321.48 15073.28 62437.93 00:34:31.801 { 00:34:31.801 "results": [ 00:34:31.801 { 00:34:31.801 "job": "NVMe0n1", 00:34:31.801 "core_mask": "0x1", 00:34:31.801 "workload": "verify", 00:34:31.801 "status": "finished", 00:34:31.801 "verify_range": { 00:34:31.801 "start": 0, 00:34:31.801 "length": 16384 00:34:31.801 }, 00:34:31.801 "queue_depth": 1024, 00:34:31.801 "io_size": 4096, 00:34:31.801 "runtime": 10.056719, 00:34:31.801 "iops": 10931.795946570646, 00:34:31.801 "mibps": 42.70232791629159, 00:34:31.801 "io_failed": 0, 00:34:31.801 "io_timeout": 0, 00:34:31.801 "avg_latency_us": 93321.48420898391, 00:34:31.801 "min_latency_us": 15073.28, 00:34:31.801 "max_latency_us": 62437.93454545455 00:34:31.801 } 00:34:31.801 ], 
00:34:31.801 "core_count": 1 00:34:31.801 } 00:34:31.801 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 124397 00:34:31.801 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 124397 ']' 00:34:31.801 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 124397 00:34:31.801 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:34:31.801 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:31.801 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124397 00:34:32.060 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:32.060 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:32.060 killing process with pid 124397 00:34:32.060 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124397' 00:34:32.060 Received shutdown signal, test time was about 10.000000 seconds 00:34:32.060 00:34:32.060 Latency(us) 00:34:32.060 [2024-12-14T18:54:47.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:32.060 [2024-12-14T18:54:47.813Z] =================================================================================================================== 00:34:32.060 [2024-12-14T18:54:47.813Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:32.060 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 124397 00:34:32.060 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 124397 00:34:32.060 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:34:32.060 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:34:32.060 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:32.060 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:34:32.060 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:32.060 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:34:32.060 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:32.060 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:32.060 rmmod nvme_tcp 00:34:32.318 rmmod nvme_fabrics 00:34:32.318 rmmod nvme_keyring 00:34:32.318 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:32.318 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:34:32.318 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:34:32.318 18:54:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 124347 ']' 00:34:32.318 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 124347 00:34:32.318 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 124347 ']' 00:34:32.318 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 124347 00:34:32.319 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:34:32.319 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:32.319 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124347 00:34:32.319 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:32.319 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:32.319 killing process with pid 124347 00:34:32.319 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124347' 00:34:32.319 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 124347 00:34:32.319 18:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 124347 00:34:32.577 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:32.577 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:32.577 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:32.577 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:34:32.577 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:34:32.577 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:32.577 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:34:32.577 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:32.577 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:32.577 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:32.577 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:32.577 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:32.577 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:32.577 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:32.577 18:54:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:32.577 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:32.577 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:32.577 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:32.577 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:32.577 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:32.836 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:32.836 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:32.836 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:32.836 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.836 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:32.836 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.836 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:34:32.836 00:34:32.836 real 0m14.017s 00:34:32.836 user 0m22.001s 00:34:32.836 sys 0m3.007s 00:34:32.836 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:32.836 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:32.836 ************************************ 00:34:32.836 END TEST nvmf_queue_depth 00:34:32.836 ************************************ 00:34:32.836 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:32.836 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:32.836 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:32.836 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:32.836 ************************************ 00:34:32.836 START TEST nvmf_target_multipath 00:34:32.836 ************************************ 00:34:32.836 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:32.836 * Looking for test storage... 
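Note: nvmf_queue_depth passes in 14.017 s of wall time — bdevperf ramped from ~9.9k to ~10.9k IOPS and finished at 10931.80 IOPS (42.70 MiB/s) over the 10 s run with zero failed or timed-out I/Os at queue depth 1024 — after which both processes were killed and the fixture torn down. The teardown, condensed (the three iptables entries are logged separately and presumably form one pipeline):

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the rules tagged SPDK_NVMF
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2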
00:34:32.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:32.836 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:32.836 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:34:32.836 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:33.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.096 --rc genhtml_branch_coverage=1 00:34:33.096 --rc genhtml_function_coverage=1 00:34:33.096 --rc genhtml_legend=1 00:34:33.096 --rc geninfo_all_blocks=1 00:34:33.096 --rc geninfo_unexecuted_blocks=1 00:34:33.096 00:34:33.096 ' 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:33.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.096 --rc genhtml_branch_coverage=1 00:34:33.096 --rc genhtml_function_coverage=1 00:34:33.096 --rc genhtml_legend=1 00:34:33.096 --rc geninfo_all_blocks=1 00:34:33.096 --rc geninfo_unexecuted_blocks=1 00:34:33.096 00:34:33.096 ' 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:33.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.096 --rc genhtml_branch_coverage=1 00:34:33.096 --rc genhtml_function_coverage=1 00:34:33.096 --rc genhtml_legend=1 00:34:33.096 --rc geninfo_all_blocks=1 00:34:33.096 --rc geninfo_unexecuted_blocks=1 00:34:33.096 00:34:33.096 ' 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:33.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.096 --rc genhtml_branch_coverage=1 00:34:33.096 --rc genhtml_function_coverage=1 00:34:33.096 --rc 
genhtml_legend=1 00:34:33.096 --rc geninfo_all_blocks=1 00:34:33.096 --rc geninfo_unexecuted_blocks=1 00:34:33.096 00:34:33.096 ' 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:33.096 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:33.097 18:54:48 
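The three PATH lines above show paths/export.sh prepending the same /opt/go, /opt/protoc and /opt/golangci directories on every source, so the exported PATH accumulates many duplicate entries; lookups still resolve correctly, the repetition is just redundant. A small sketch of collapsing such a PATH while keeping first-occurrence order — dedupe_path is a hypothetical helper, not something the repo ships:

# Sketch: drop duplicate PATH entries, keeping the first occurrence of each.
dedupe_path() {
    local IFS=: entry out='' seen=':'
    local -a parts
    read -ra parts <<< "$1"
    for entry in "${parts[@]}"; do
        [[ -z $entry || $seen == *":$entry:"* ]] && continue
        seen+="$entry:"                  # remember directories already kept
        out+="${out:+:}$entry"
    done
    printf '%s\n' "$out"
}

PATH=$(dedupe_path "$PATH")
export PATH
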
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:33.097 18:54:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:33.097 Cannot find device "nvmf_init_br" 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:33.097 Cannot find device "nvmf_init_br2" 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:33.097 Cannot find device "nvmf_tgt_br" 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:33.097 Cannot find device "nvmf_tgt_br2" 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 00:34:33.097 Cannot find device "nvmf_init_br" 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:33.097 Cannot find device "nvmf_init_br2" 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:33.097 Cannot find device "nvmf_tgt_br" 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:34:33.097 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:33.098 Cannot find device "nvmf_tgt_br2" 00:34:33.098 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:34:33.098 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:33.098 Cannot find device "nvmf_br" 00:34:33.098 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:34:33.098 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:33.098 Cannot find device "nvmf_init_if" 00:34:33.098 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:34:33.098 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:33.098 Cannot find device "nvmf_init_if2" 00:34:33.098 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:34:33.098 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:33.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:33.098 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:34:33.098 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:33.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:33.098 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:34:33.098 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:33.098 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:33.098 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:33.357 18:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@786 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:33.357 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:33.357 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:34:33.357 00:34:33.357 --- 10.0.0.3 ping statistics --- 00:34:33.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.357 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:33.357 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:33.357 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:34:33.357 00:34:33.357 --- 10.0.0.4 ping statistics --- 00:34:33.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.357 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:33.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:33.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:34:33.357 00:34:33.357 --- 10.0.0.1 ping statistics --- 00:34:33.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.357 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:33.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:33.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:34:33.357 00:34:33.357 --- 10.0.0.2 ping statistics --- 00:34:33.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.357 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@457 -- # return 0 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@505 -- # nvmfpid=124785 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@506 -- # waitforlisten 124785 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 124785 ']' 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:33.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
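Everything from remove_spdk_ns through the four pings above is nvmf_veth_init building the test network: stale devices are torn down (hence the harmless "Cannot find device" lines), a fresh nvmf_tgt_ns_spdk namespace receives the target ends of two veth pairs (10.0.0.3 and 10.0.0.4), the initiator ends stay on the host (10.0.0.1 and 10.0.0.2), all host-side peers are enslaved to the nvmf_br bridge, iptables ACCEPT rules are tagged with an SPDK_NVMF comment so teardown can find them, and connectivity is verified in both directions. A condensed sketch of one initiator/target pair under those assumptions (the real helper creates two of each):

#!/usr/bin/env bash
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk        # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br               # bridge the host-side peers
ip link set nvmf_tgt_br master nvmf_br
# Tag the rule so teardown can locate and delete exactly these test rules.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3                                    # host -> namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # namespace -> host
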
00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:33.357 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:33.616 [2024-12-14 18:54:49.160715] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:33.616 [2024-12-14 18:54:49.162049] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:33.616 [2024-12-14 18:54:49.162120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:33.616 [2024-12-14 18:54:49.301590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:33.616 [2024-12-14 18:54:49.366882] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:33.616 [2024-12-14 18:54:49.366953] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:33.616 [2024-12-14 18:54:49.366965] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:33.616 [2024-12-14 18:54:49.366973] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:33.616 [2024-12-14 18:54:49.366979] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:33.616 [2024-12-14 18:54:49.367175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:33.616 [2024-12-14 18:54:49.367332] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:34:33.875 [2024-12-14 18:54:49.367973] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:34:33.875 [2024-12-14 18:54:49.368035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:33.875 [2024-12-14 18:54:49.463440] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:33.875 [2024-12-14 18:54:49.463952] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:33.875 [2024-12-14 18:54:49.464186] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:33.875 [2024-12-14 18:54:49.464263] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:33.875 [2024-12-14 18:54:49.464495] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
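At this point nvmfappstart has launched the target and the startup banner above confirms interrupt mode end to end: the DPDK EAL comes up on four cores, each reactor starts, and every nvmf_tgt poll-group thread is switched to interrupt mode. The launch-and-wait pattern looks roughly like this, with the waitforlisten polling written out explicitly as an illustrative stand-in:

#!/usr/bin/env bash
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Start nvmf_tgt inside the test namespace: shm id 0 (-i 0), all tracepoint
# groups (-e 0xFFFF), interrupt mode, reactors on cores 0-3 (-m 0xF).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!

# Poll until the app answers on its RPC socket (stand-in for waitforlisten).
for (( i = 0; i < 100; i++ )); do
    "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
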
00:34:33.875 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:33.875 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:34:33.875 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:33.875 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:33.875 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:33.875 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:33.875 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:34.134 [2024-12-14 18:54:49.833005] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:34.134 18:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:34.393 Malloc0 00:34:34.393 18:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:34:34.961 18:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:34.961 18:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:35.219 [2024-12-14 18:54:50.921048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:35.219 18:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:34:35.786 [2024-12-14 18:54:51.236994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:34:35.786 18:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:34:35.786 18:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:34:35.786 18:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:34:35.786 18:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:34:35.786 18:54:51 
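The RPC sequence just traced provisions a single malloc-backed subsystem reachable through two TCP listeners, and the initiator then connects to both portals; because both sessions carry the same subsystem NQN, native NVMe multipath merges them into one namespace with two controller paths. Condensed, with all values copied from the trace (-g/-G on the connect request TCP header and data digests):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6
hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6

"$rpc_py" nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
"$rpc_py" bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev, 512 B blocks
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420

# One connect per portal; the kernel groups both sessions under a single
# nvme-subsysN, exposing nvme0n1 backed by paths nvme0c0n1 and nvme0c1n1.
nvme connect --hostnqn=$hostnqn --hostid=$hostid -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
nvme connect --hostnqn=$hostnqn --hostid=$hostid -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G
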
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:34:35.786 18:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:34:35.786 18:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:34:38.318 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=124905 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:34:38.319 18:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:34:38.319 [global] 00:34:38.319 thread=1 00:34:38.319 invalidate=1 00:34:38.319 rw=randrw 00:34:38.319 time_based=1 00:34:38.319 runtime=6 00:34:38.319 ioengine=libaio 00:34:38.319 direct=1 00:34:38.319 bs=4096 00:34:38.319 iodepth=128 00:34:38.319 norandommap=0 00:34:38.319 numjobs=1 00:34:38.319 00:34:38.319 verify_dump=1 00:34:38.319 verify_backlog=512 00:34:38.319 verify_state_save=0 00:34:38.319 do_verify=1 00:34:38.319 verify=crc32c-intel 00:34:38.319 [job0] 00:34:38.319 filename=/dev/nvme0n1 00:34:38.319 Could not set queue depth (nvme0n1) 00:34:38.319 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:38.319 fio-3.35 00:34:38.319 Starting 1 thread 00:34:38.887 18:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:34:39.145 18:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:34:39.713 18:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
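check_ana_state, traced here and after every listener flip below, is a bounded poll: it resolves /sys/block/<path>/ana_state and re-reads it once per second until the kernel reports the expected ANA state or a 20-iteration timeout lapses. A reconstruction from the traced lines (so treat the details as approximate, not the script verbatim):

# Sketch of the polled ANA-state check used throughout this test.
check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state

    # Loop while the sysfs node is missing or still shows the wrong state.
    while [[ ! -e $ana_state_f ]] || [[ $(< "$ana_state_f") != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1    # give up after roughly 20 seconds
        sleep 1s
    done
}

check_ana_state nvme0c0n1 inaccessible      # path through 10.0.0.3
check_ana_state nvme0c1n1 non-optimized     # path through 10.0.0.4
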
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:34:39.713 18:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:34:39.713 18:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:34:39.713 18:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:34:39.713 18:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:34:39.713 18:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:34:39.713 18:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:34:39.713 18:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:34:39.713 18:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:34:39.713 18:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:34:39.713 18:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:34:39.713 18:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:34:39.713 18:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:34:40.648 18:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:34:40.648 18:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:34:40.648 18:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:34:40.648 18:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:34:40.907 18:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:34:41.166 18:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:34:41.166 18:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:34:41.166 18:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:34:41.166 18:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:34:41.166 18:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:34:41.166 18:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:34:41.166 18:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:34:41.166 18:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:34:41.166 18:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:34:41.166 18:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:34:41.166 18:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:34:41.166 18:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:34:41.166 18:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:34:42.102 18:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:34:42.102 18:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:34:42.102 18:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:34:42.102 18:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 124905 00:34:44.632 00:34:44.632 job0: (groupid=0, jobs=1): err= 0: pid=124926: Sat Dec 14 18:54:59 2024 00:34:44.632 read: IOPS=11.9k, BW=46.6MiB/s (48.9MB/s)(280MiB/6006msec) 00:34:44.632 slat (usec): min=5, max=6404, avg=46.98, stdev=204.64 00:34:44.632 clat (usec): min=2056, max=53909, avg=7140.99, stdev=1772.77 00:34:44.632 lat (usec): min=2105, max=53917, avg=7187.97, stdev=1780.19 00:34:44.632 clat percentiles (usec): 00:34:44.632 | 1.00th=[ 4490], 5.00th=[ 5407], 10.00th=[ 5800], 20.00th=[ 6194], 00:34:44.632 | 30.00th=[ 6456], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7308], 00:34:44.632 | 70.00th=[ 7570], 80.00th=[ 7898], 90.00th=[ 8455], 95.00th=[ 9110], 00:34:44.632 | 99.00th=[10683], 99.50th=[11207], 99.90th=[16581], 99.95th=[50070], 00:34:44.632 | 99.99th=[53216] 00:34:44.632 bw ( KiB/s): min=15056, max=34384, per=53.54%, avg=25551.27, stdev=6434.15, samples=11 00:34:44.632 iops : min= 3764, max= 8596, avg=6387.82, stdev=1608.54, samples=11 00:34:44.632 write: IOPS=7122, BW=27.8MiB/s (29.2MB/s)(150MiB/5391msec); 0 zone resets 00:34:44.632 slat (usec): min=11, max=2350, avg=58.17, stdev=124.91 00:34:44.632 clat (usec): min=1102, max=53222, avg=6512.45, stdev=1970.77 00:34:44.632 lat (usec): min=1154, max=53241, avg=6570.62, stdev=1972.72 00:34:44.632 clat percentiles (usec): 00:34:44.632 | 1.00th=[ 3589], 5.00th=[ 4752], 10.00th=[ 5407], 20.00th=[ 5866], 00:34:44.632 | 30.00th=[ 6128], 40.00th=[ 6325], 50.00th=[ 6521], 60.00th=[ 6652], 00:34:44.632 | 70.00th=[ 6849], 80.00th=[ 7046], 90.00th=[ 7373], 95.00th=[ 7701], 00:34:44.632 | 99.00th=[ 9634], 99.50th=[10421], 99.90th=[49021], 99.95th=[50070], 00:34:44.632 | 99.99th=[52691] 00:34:44.632 bw ( KiB/s): min=15288, max=33864, per=89.66%, avg=25546.91, stdev=6114.93, samples=11 00:34:44.632 iops : min= 3822, max= 8466, avg=6386.73, stdev=1528.73, samples=11 00:34:44.632 lat (msec) : 2=0.01%, 4=1.06%, 10=97.18%, 20=1.64%, 50=0.07% 00:34:44.632 lat (msec) : 100=0.05% 00:34:44.632 cpu : usr=5.98%, sys=24.15%, ctx=8161, majf=0, minf=102 00:34:44.632 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:34:44.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:44.632 issued rwts: total=71656,38400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.632 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:44.632 00:34:44.632 Run status group 0 (all jobs): 00:34:44.632 READ: bw=46.6MiB/s (48.9MB/s), 46.6MiB/s-46.6MiB/s (48.9MB/s-48.9MB/s), io=280MiB (294MB), run=6006-6006msec 00:34:44.632 WRITE: bw=27.8MiB/s (29.2MB/s), 27.8MiB/s-27.8MiB/s (29.2MB/s-29.2MB/s), io=150MiB (157MB), run=5391-5391msec 00:34:44.632 00:34:44.632 Disk stats (read/write): 00:34:44.632 nvme0n1: ios=70547/37888, merge=0/0, ticks=465846/235978, in_queue=701824, util=98.72% 00:34:44.632 18:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:34:44.633 18:55:00 
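With the first (numa-policy) run summarized above — about 46.6 MiB/s read and 27.8 MiB/s write driven through the one usable path — the test next returns both listeners to optimized and, at multipath.sh line 113 below, echoes round-robin to select the second I/O policy. The trace does not show where that echo lands, but it presumably feeds the subsystem's native-multipath iopolicy attribute; a sketch under that assumption, reusing the nvme-subsys0 name found earlier:

subsystem=nvme-subsys0
cat /sys/class/nvme-subsystem/$subsystem/iopolicy     # expected to print: numa
echo round-robin > /sys/class/nvme-subsystem/$subsystem/iopolicy
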
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:34:44.633 18:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:34:44.633 18:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:34:44.633 18:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:34:44.633 18:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:34:44.633 18:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:34:44.633 18:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:34:44.633 18:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:34:44.633 18:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:34:44.633 18:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:34:44.633 18:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:34:44.633 18:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:34:44.633 18:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:34:44.633 18:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:34:46.007 18:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:34:46.007 18:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:34:46.007 18:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:34:46.008 18:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:34:46.008 18:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=125051 00:34:46.008 18:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:34:46.008 18:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:34:46.008 [global] 00:34:46.008 thread=1 00:34:46.008 invalidate=1 00:34:46.008 rw=randrw 00:34:46.008 time_based=1 00:34:46.008 runtime=6 00:34:46.008 ioengine=libaio 00:34:46.008 direct=1 00:34:46.008 bs=4096 00:34:46.008 iodepth=128 00:34:46.008 norandommap=0 00:34:46.008 numjobs=1 00:34:46.008 00:34:46.008 verify_dump=1 00:34:46.008 verify_backlog=512 00:34:46.008 verify_state_save=0 00:34:46.008 do_verify=1 00:34:46.008 verify=crc32c-intel 00:34:46.008 [job0] 00:34:46.008 filename=/dev/nvme0n1 00:34:46.008 Could not set queue depth (nvme0n1) 00:34:46.008 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:46.008 fio-3.35 00:34:46.008 Starting 1 thread 00:34:46.943 18:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:34:46.943 18:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:34:47.509 18:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:34:47.509 18:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:34:47.510 18:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:34:47.510 18:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:34:47.510 18:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:34:47.510 18:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:34:47.510 18:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:34:47.510 18:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:34:47.510 18:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:34:47.510 18:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:34:47.510 18:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:34:47.510 18:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:34:47.510 18:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:34:48.444 18:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:34:48.444 18:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:34:48.444 18:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:34:48.444 18:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:34:48.703 18:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:34:48.961 18:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:34:48.961 18:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:34:48.961 18:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:34:48.961 18:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:34:48.961 18:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:34:48.961 18:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:34:48.961 18:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:34:48.961 18:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:34:48.961 18:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:34:48.961 18:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:34:48.961 18:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:34:48.961 18:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:34:48.961 18:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:34:49.896 18:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:34:49.896 18:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:34:49.896 18:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:34:49.896 18:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 125051 00:34:52.425 00:34:52.425 job0: (groupid=0, jobs=1): err= 0: pid=125075: Sat Dec 14 18:55:07 2024 00:34:52.425 read: IOPS=13.1k, BW=51.1MiB/s (53.6MB/s)(307MiB/6004msec) 00:34:52.425 slat (usec): min=2, max=5426, avg=37.86, stdev=180.08 00:34:52.425 clat (usec): min=299, max=18091, avg=6667.91, stdev=1881.96 00:34:52.425 lat (usec): min=316, max=18098, avg=6705.77, stdev=1887.97 00:34:52.425 clat percentiles (usec): 00:34:52.425 | 1.00th=[ 2147], 5.00th=[ 3490], 10.00th=[ 4555], 20.00th=[ 5669], 00:34:52.425 | 30.00th=[ 5997], 40.00th=[ 6259], 50.00th=[ 6587], 60.00th=[ 6849], 00:34:52.425 | 70.00th=[ 7177], 80.00th=[ 7570], 90.00th=[ 8717], 95.00th=[10028], 00:34:52.425 | 99.00th=[12649], 99.50th=[13960], 99.90th=[16319], 99.95th=[16909], 00:34:52.425 | 99.99th=[17433] 00:34:52.425 bw ( KiB/s): min=19544, max=33528, per=51.60%, avg=27016.09, stdev=4836.01, samples=11 00:34:52.425 iops : min= 4886, max= 8382, avg=6754.00, stdev=1209.02, samples=11 00:34:52.425 write: IOPS=7416, BW=29.0MiB/s (30.4MB/s)(155MiB/5346msec); 0 zone resets 00:34:52.425 slat (usec): min=11, max=3128, avg=49.36, stdev=101.27 00:34:52.425 clat (usec): min=423, max=15899, avg=5997.26, stdev=1807.54 00:34:52.425 lat (usec): min=463, max=15917, avg=6046.63, stdev=1809.22 00:34:52.425 clat percentiles (usec): 00:34:52.425 | 1.00th=[ 1811], 5.00th=[ 2737], 10.00th=[ 3621], 20.00th=[ 5014], 00:34:52.425 | 30.00th=[ 5604], 40.00th=[ 5866], 50.00th=[ 6063], 60.00th=[ 6259], 00:34:52.425 | 70.00th=[ 6521], 80.00th=[ 6783], 90.00th=[ 7570], 95.00th=[ 9503], 00:34:52.425 | 99.00th=[11469], 99.50th=[12518], 99.90th=[14615], 99.95th=[15008], 00:34:52.425 | 99.99th=[15795] 00:34:52.425 bw ( KiB/s): min=19440, 
max=33064, per=90.91%, avg=26968.09, stdev=4593.81, samples=11 00:34:52.425 iops : min= 4860, max= 8266, avg=6742.00, stdev=1148.47, samples=11 00:34:52.425 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.07% 00:34:52.425 lat (msec) : 2=0.99%, 4=7.87%, 10=86.26%, 20=4.76% 00:34:52.425 cpu : usr=6.23%, sys=25.90%, ctx=9337, majf=0, minf=90 00:34:52.425 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:52.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:52.425 issued rwts: total=78590,39647,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.425 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:52.425 00:34:52.425 Run status group 0 (all jobs): 00:34:52.425 READ: bw=51.1MiB/s (53.6MB/s), 51.1MiB/s-51.1MiB/s (53.6MB/s-53.6MB/s), io=307MiB (322MB), run=6004-6004msec 00:34:52.425 WRITE: bw=29.0MiB/s (30.4MB/s), 29.0MiB/s-29.0MiB/s (30.4MB/s-30.4MB/s), io=155MiB (162MB), run=5346-5346msec 00:34:52.425 00:34:52.425 Disk stats (read/write): 00:34:52.425 nvme0n1: ios=77483/39053, merge=0/0, ticks=480891/225067, in_queue=705958, util=98.68% 00:34:52.425 18:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:52.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:52.425 18:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:52.425 18:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:34:52.425 18:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:34:52.425 18:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:52.425 18:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:52.425 18:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:34:52.425 18:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:34:52.425 18:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:52.701 18:55:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:52.701 rmmod nvme_tcp 00:34:52.701 rmmod nvme_fabrics 00:34:52.701 rmmod nvme_keyring 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n 124785 ']' 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # killprocess 124785 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 124785 ']' 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 124785 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124785 00:34:52.701 killing process with pid 124785 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124785' 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 124785 00:34:52.701 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 124785 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:52.973 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:52.974 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:53.232 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:53.232 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:53.232 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.232 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:53.232 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.232 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:34:53.232 00:34:53.232 real 0m20.320s 00:34:53.232 user 1m11.734s 00:34:53.232 sys 0m8.440s 00:34:53.232 ************************************ 00:34:53.232 END TEST nvmf_target_multipath 00:34:53.232 ************************************ 00:34:53.232 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:53.232 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:53.232 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:53.232 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:53.232 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:53.232 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:53.232 ************************************ 00:34:53.232 START TEST nvmf_zcopy 00:34:53.232 ************************************ 00:34:53.232 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:53.232 * Looking for test storage... 00:34:53.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:53.232 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:53.232 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:34:53.232 18:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:53.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.492 --rc genhtml_branch_coverage=1 00:34:53.492 --rc genhtml_function_coverage=1 00:34:53.492 --rc genhtml_legend=1 00:34:53.492 --rc geninfo_all_blocks=1 00:34:53.492 --rc geninfo_unexecuted_blocks=1 00:34:53.492 00:34:53.492 ' 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:53.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.492 --rc genhtml_branch_coverage=1 00:34:53.492 --rc genhtml_function_coverage=1 00:34:53.492 --rc genhtml_legend=1 00:34:53.492 --rc geninfo_all_blocks=1 00:34:53.492 --rc geninfo_unexecuted_blocks=1 00:34:53.492 00:34:53.492 ' 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:53.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.492 --rc genhtml_branch_coverage=1 00:34:53.492 --rc genhtml_function_coverage=1 00:34:53.492 --rc genhtml_legend=1 00:34:53.492 --rc geninfo_all_blocks=1 00:34:53.492 --rc geninfo_unexecuted_blocks=1 00:34:53.492 00:34:53.492 ' 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:53.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.492 --rc genhtml_branch_coverage=1 00:34:53.492 --rc genhtml_function_coverage=1 00:34:53.492 --rc genhtml_legend=1 00:34:53.492 --rc geninfo_all_blocks=1 00:34:53.492 --rc geninfo_unexecuted_blocks=1 00:34:53.492 00:34:53.492 ' 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.492 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.493 18:55:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@456 -- # nvmf_veth_init 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:53.493 18:55:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:53.493 Cannot find device "nvmf_init_br" 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:53.493 Cannot find device "nvmf_init_br2" 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:53.493 Cannot find device "nvmf_tgt_br" 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:53.493 Cannot find device "nvmf_tgt_br2" 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:53.493 Cannot find device "nvmf_init_br" 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:53.493 Cannot find device "nvmf_init_br2" 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:53.493 Cannot find device "nvmf_tgt_br" 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:53.493 Cannot find device "nvmf_tgt_br2" 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:53.493 Cannot find device 
"nvmf_br" 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:53.493 Cannot find device "nvmf_init_if" 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:53.493 Cannot find device "nvmf_init_if2" 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:53.493 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:53.493 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:53.493 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:53.753 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:53.753 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:53.753 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:53.753 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:53.753 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:53.753 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:53.753 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:53.753 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:53.753 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:53.753 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:53.753 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:53.753 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:53.753 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:53.754 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:53.754 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:34:53.754 00:34:53.754 --- 10.0.0.3 ping statistics --- 00:34:53.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.754 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:53.754 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:34:53.754 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:34:53.754 00:34:53.754 --- 10.0.0.4 ping statistics --- 00:34:53.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.754 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:53.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:53.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:34:53.754 00:34:53.754 --- 10.0.0.1 ping statistics --- 00:34:53.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.754 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:53.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:53.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:34:53.754 00:34:53.754 --- 10.0.0.2 ping statistics --- 00:34:53.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.754 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@457 -- # return 0 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=125399 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 125399 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 125399 ']' 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:53.754 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:54.013 [2024-12-14 18:55:09.541879] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:54.013 [2024-12-14 18:55:09.542772] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:54.013 [2024-12-14 18:55:09.543338] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:54.013 [2024-12-14 18:55:09.680029] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.273 [2024-12-14 18:55:09.775264] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:54.273 [2024-12-14 18:55:09.775340] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:54.273 [2024-12-14 18:55:09.775367] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:54.273 [2024-12-14 18:55:09.775377] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:54.273 [2024-12-14 18:55:09.775387] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:54.273 [2024-12-14 18:55:09.775428] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:54.273 [2024-12-14 18:55:09.897713] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:54.273 [2024-12-14 18:55:09.898128] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
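Condensed, the trace around this point launches the SPDK target inside the test network namespace, waits for its RPC socket, and then builds the zero-copy TCP target over RPC. The sketch below reconstructs those steps from the commands visible in the trace itself; the readiness loop is a minimal stand-in for the harness's waitforlisten helper (whose exact implementation is not shown here), and $rpc abbreviates the rpc.py path used throughout the log.
# Hedged reconstruction of the target bring-up traced here (command lines
# verbatim from the log; the wait loop is a simplified waitforlisten stand-in).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Launch nvmf_tgt in interrupt mode inside the target namespace. $! is an
# approximation of how the harness records nvmfpid (125399 in this run).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!

# Poll until the app answers on /var/tmp/spdk.sock; rpc_get_methods is a
# lightweight query that succeeds once the RPC server is listening.
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done

# Zero-copy TCP target bring-up, matching the rpc_cmd calls in the trace
# that follows: transport with --zcopy, subsystem, listener, malloc bdev,
# and namespace attach.
"$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
"$rpc" bdev_malloc_create 32 4096 -b malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1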
00:34:54.273 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:54.273 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:34:54.273 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:54.273 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:54.273 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:54.273 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:54.273 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:34:54.273 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:34:54.273 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.273 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:54.273 [2024-12-14 18:55:09.988369] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:54.273 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.273 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:54.273 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.273 18:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:54.273 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.273 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:54.273 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.273 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:54.273 [2024-12-14 18:55:10.016722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:54.273 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.273 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:34:54.273 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.273 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:54.532 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.532 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:34:54.532 18:55:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.532 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:54.532 malloc0 00:34:54.532 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.532 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:54.532 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.532 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:54.532 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.532 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:34:54.532 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:34:54.532 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:34:54.532 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:34:54.532 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:34:54.532 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:34:54.532 { 00:34:54.532 "params": { 00:34:54.532 "name": "Nvme$subsystem", 00:34:54.532 "trtype": "$TEST_TRANSPORT", 00:34:54.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:54.532 "adrfam": "ipv4", 00:34:54.532 "trsvcid": "$NVMF_PORT", 00:34:54.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:54.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:54.532 "hdgst": ${hdgst:-false}, 00:34:54.532 "ddgst": ${ddgst:-false} 00:34:54.532 }, 00:34:54.532 "method": "bdev_nvme_attach_controller" 00:34:54.532 } 00:34:54.532 EOF 00:34:54.532 )") 00:34:54.532 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:34:54.532 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:34:54.532 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:34:54.532 18:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:34:54.532 "params": { 00:34:54.532 "name": "Nvme1", 00:34:54.532 "trtype": "tcp", 00:34:54.532 "traddr": "10.0.0.3", 00:34:54.532 "adrfam": "ipv4", 00:34:54.532 "trsvcid": "4420", 00:34:54.532 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:54.532 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:54.532 "hdgst": false, 00:34:54.532 "ddgst": false 00:34:54.532 }, 00:34:54.532 "method": "bdev_nvme_attach_controller" 00:34:54.532 }' 00:34:54.532 [2024-12-14 18:55:10.147786] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
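The JSON printed just above is what gen_nvmf_target_json hands to bdevperf: a single bdev_nvme_attach_controller entry pointing at the target's first listener, delivered over a process-substitution descriptor (the /dev/fd/62 seen in the command line). A standalone sketch of the equivalent invocation follows; the bdevperf flags and the params object are verbatim from the trace, while the surrounding "subsystems"/"bdev" envelope is an assumption about the config wrapper, since the log only prints the inner entry.
# Hedged sketch of the bdevperf run traced here: 10 s verify workload,
# queue depth 128, 8 KiB I/O, against the NVMe-oF TCP target at 10.0.0.3.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -t 10 -q 128 -w verify -o 8192 \
    --json <(cat << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)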
00:34:54.532 [2024-12-14 18:55:10.147889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125436 ] 00:34:54.791 [2024-12-14 18:55:10.291030] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.791 [2024-12-14 18:55:10.364509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.050 Running I/O for 10 seconds... 00:34:56.925 6050.00 IOPS, 47.27 MiB/s [2024-12-14T18:55:13.614Z] 6107.00 IOPS, 47.71 MiB/s [2024-12-14T18:55:14.991Z] 6113.00 IOPS, 47.76 MiB/s [2024-12-14T18:55:15.564Z] 6126.25 IOPS, 47.86 MiB/s [2024-12-14T18:55:16.941Z] 6149.40 IOPS, 48.04 MiB/s [2024-12-14T18:55:17.877Z] 6171.50 IOPS, 48.21 MiB/s [2024-12-14T18:55:18.813Z] 6181.71 IOPS, 48.29 MiB/s [2024-12-14T18:55:19.749Z] 6187.25 IOPS, 48.34 MiB/s [2024-12-14T18:55:20.685Z] 6185.78 IOPS, 48.33 MiB/s [2024-12-14T18:55:20.685Z] 6179.30 IOPS, 48.28 MiB/s 00:35:04.932 Latency(us) 00:35:04.932 [2024-12-14T18:55:20.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.932 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:35:04.932 Verification LBA range: start 0x0 length 0x1000 00:35:04.932 Nvme1n1 : 10.02 6182.20 48.30 0.00 0.00 20644.85 1832.03 31218.97 00:35:04.932 [2024-12-14T18:55:20.685Z] =================================================================================================================== 00:35:04.932 [2024-12-14T18:55:20.685Z] Total : 6182.20 48.30 0.00 0.00 20644.85 1832.03 31218.97 00:35:05.191 18:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=125548 00:35:05.192 18:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:35:05.192 18:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:35:05.192 18:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:05.192 18:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:35:05.192 18:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:35:05.192 18:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:35:05.192 18:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:35:05.192 18:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:35:05.192 { 00:35:05.192 "params": { 00:35:05.192 "name": "Nvme$subsystem", 00:35:05.192 "trtype": "$TEST_TRANSPORT", 00:35:05.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:05.192 "adrfam": "ipv4", 00:35:05.192 "trsvcid": "$NVMF_PORT", 00:35:05.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:05.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:05.192 "hdgst": ${hdgst:-false}, 00:35:05.192 "ddgst": ${ddgst:-false} 00:35:05.192 }, 00:35:05.192 "method": "bdev_nvme_attach_controller" 00:35:05.192 } 00:35:05.192 EOF 00:35:05.192 )") 00:35:05.192 18:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:35:05.192 [2024-12-14 
18:55:20.780083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.192 [2024-12-14 18:55:20.780138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.192 18:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:35:05.192 2024/12/14 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:05.192 18:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:35:05.192 18:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1", "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }'
[the three-line error pattern above (the target-side *ERROR* lines from subsystem.c:2128 and nvmf_rpc.c:1517, plus the matching client-side JSON-RPC error) repeats x4: 18:55:20.792, .804, .816, .828; later repeats of the same pattern are collapsed into notes like this one]
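These -32602 failures are expected noise rather than a test failure: target/zcopy.sh@30 already attached malloc0 to nqn.2016-06.io.spdk:cnode1 as NSID 1 at the top of this section, and the harness keeps re-issuing the same nvmf_subsystem_add_ns call for the duration of the run (apparently by design, to exercise the namespace-add path while zero-copy I/O is in flight), so each probe is rejected with "Requested NSID 1 already in use". Reconstructed from the method, params, and error fields logged above, each probe and reply would look roughly like this on the wire (JSON-RPC 2.0 framing and the id value are assumptions, not shown in the log):

  -> {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
      "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                 "namespace": {"bdev_name": "malloc0", "nsid": 1, "no_auto_visible": false}}}
  <- {"jsonrpc": "2.0", "id": 1, "error": {"code": -32602, "message": "Invalid parameters"}}

The stray "%!s(bool=false)" in the client log is a Go fmt artifact from printing the boolean no_auto_visible parameter with a string verb, not part of the payload.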
00:35:05.192 [2024-12-14 18:55:20.839366] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:05.192 [2024-12-14 18:55:20.839466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125548 ]
[pattern repeats x6: 18:55:20.840, .852, .864, .876, .888, .900]
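One detail worth noting in the EAL line above: it matches the EAL line of the first bdevperf launch (pid 125436, earlier in this section) except for --file-prefix=spdk_pid125548. In DPDK, --file-prefix namespaces the hugepage and runtime files per process, which keeps this second bdevperf's memory state separate from the nvmf target and any other SPDK process on the host; --iova-mode=pa selects physical-address IOVA and --base-virtaddr fixes the base of the virtual mappings (standard DPDK EAL semantics, stated here as background rather than taken from this log).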
[pattern repeats x6: 18:55:20.912, .924, .936, .948, .960, .972]
00:35:05.452 [2024-12-14 18:55:20.979039] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[pattern repeats x5: 18:55:20.984, .996, 18:55:21.008, .020, .032]
00:35:05.452 [2024-12-14 18:55:21.041034] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
[pattern repeats x8: 18:55:21.044, .056, .068, .080, .092, .104, .116, .128]
[pattern repeats x6: 18:55:21.140, .152, .164, .176, .188, .200]
00:35:05.712 Running I/O for 5 seconds...
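The run starting here is the second bdevperf invocation traced at target/zcopy.sh@37 above, against the same Nvme1n1 attachment. Reading its flags (meanings as given in bdevperf's usage text, noted as background rather than taken from this log):

  # -t 5       run for 5 seconds
  # -q 128     keep 128 I/Os outstanding
  # -w randrw  random mixed read/write workload
  # -M 50      50% reads / 50% writes for the mixed workload
  # -o 8192    8 KiB per I/O
  build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192

Unlike the earlier 10-second -w verify run, which reads back and checks written data, this one measures plain mixed random throughput, which helps explain why its IOPS samples below come in roughly double the verify numbers.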
[pattern repeats x59 with timestamps advancing from 18:55:21.212 to 18:55:22.147; probe spacing is now irregular (roughly 10-20 ms) while the 5-second randrw run proceeds, and the elapsed-time stamps advance 00:35:05.712 -> 00:35:06.491 over the run]
[pattern repeats x3: 18:55:22.160, .179, .191]
00:35:06.491 [2024-12-14 18:55:22.212657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:06.491 [2024-12-14 18:55:22.212693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:06.491 12074.00 IOPS, 94.33 MiB/s [2024-12-14T18:55:22.244Z] 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[pattern repeats x2: 18:55:22.227, .236]
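The interleaved per-second sample above is internally consistent with the 8192-byte I/O size: 12074.00 IOPS x 8192 B is about 94.33 MiB/s, and the same conversion reproduces the verify-run summary earlier in this section (6182.20 IOPS -> 48.30 MiB/s). A one-line check:

  awk 'BEGIN { printf "%.2f MiB/s\n", 12074.00 * 8192 / (1024 * 1024) }'   # prints 94.33 MiB/s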
[pattern repeats x13: 18:55:22.252 through .444]
00:35:06.750 [2024-12-14 18:55:22.454812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:06.750 [2024-12-14 18:55:22.454846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:35:06.750 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:06.750 [2024-12-14 18:55:22.469689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:06.750 [2024-12-14 18:55:22.469725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:06.751 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:06.751 [2024-12-14 18:55:22.486676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:06.751 [2024-12-14 18:55:22.486711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:06.751 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:06.751 [2024-12-14 18:55:22.499237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:06.751 [2024-12-14 18:55:22.499272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.010 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.010 [2024-12-14 18:55:22.508854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.010 [2024-12-14 18:55:22.508903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.010 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.010 [2024-12-14 18:55:22.524364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.010 [2024-12-14 18:55:22.524515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.010 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.010 [2024-12-14 18:55:22.542932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.010 [2024-12-14 18:55:22.542967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.010 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:35:07.010 [2024-12-14 18:55:22.555466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.010 [2024-12-14 18:55:22.555502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.010 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.010 [2024-12-14 18:55:22.564929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.010 [2024-12-14 18:55:22.564964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.010 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.010 [2024-12-14 18:55:22.581129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.010 [2024-12-14 18:55:22.581163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.010 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.010 [2024-12-14 18:55:22.599636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.010 [2024-12-14 18:55:22.599670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.011 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.011 [2024-12-14 18:55:22.614656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.011 [2024-12-14 18:55:22.614690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.011 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.011 [2024-12-14 18:55:22.627098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.011 [2024-12-14 18:55:22.627256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.011 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.011 [2024-12-14 18:55:22.640467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.011 [2024-12-14 18:55:22.640502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.011 2024/12/14 18:55:22 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.011 [2024-12-14 18:55:22.657953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.011 [2024-12-14 18:55:22.658103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.011 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.011 [2024-12-14 18:55:22.673839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.011 [2024-12-14 18:55:22.673874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.011 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.011 [2024-12-14 18:55:22.691536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.011 [2024-12-14 18:55:22.691572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.011 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.011 [2024-12-14 18:55:22.704382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.011 [2024-12-14 18:55:22.704417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.011 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.011 [2024-12-14 18:55:22.723874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.011 [2024-12-14 18:55:22.723909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.011 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.011 [2024-12-14 18:55:22.736103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.011 [2024-12-14 18:55:22.736137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.011 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.011 [2024-12-14 18:55:22.745491] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.011 [2024-12-14 18:55:22.745665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.011 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.011 [2024-12-14 18:55:22.761019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.271 [2024-12-14 18:55:22.761167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.271 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.271 [2024-12-14 18:55:22.776434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.271 [2024-12-14 18:55:22.776627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.271 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.271 [2024-12-14 18:55:22.796180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.271 [2024-12-14 18:55:22.796329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.271 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.271 [2024-12-14 18:55:22.807091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.271 [2024-12-14 18:55:22.807121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.271 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.271 [2024-12-14 18:55:22.821247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.271 [2024-12-14 18:55:22.821277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.271 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.271 [2024-12-14 18:55:22.838131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.271 [2024-12-14 18:55:22.838160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.271 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.271 [2024-12-14 18:55:22.853530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.271 [2024-12-14 18:55:22.853560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.271 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.271 [2024-12-14 18:55:22.871560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.271 [2024-12-14 18:55:22.871589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.271 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.271 [2024-12-14 18:55:22.882054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.271 [2024-12-14 18:55:22.882085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.271 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.271 [2024-12-14 18:55:22.896074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.271 [2024-12-14 18:55:22.896103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.271 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.271 [2024-12-14 18:55:22.905205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.271 [2024-12-14 18:55:22.905235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.271 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.271 [2024-12-14 18:55:22.921050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.271 [2024-12-14 18:55:22.921080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.271 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.271 [2024-12-14 18:55:22.939787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:35:07.271 [2024-12-14 18:55:22.939816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.271 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.271 [2024-12-14 18:55:22.951556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.271 [2024-12-14 18:55:22.951585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.271 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.271 [2024-12-14 18:55:22.963689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.271 [2024-12-14 18:55:22.963718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.271 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.271 [2024-12-14 18:55:22.973382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.271 [2024-12-14 18:55:22.973411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.271 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.271 [2024-12-14 18:55:22.988838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.271 [2024-12-14 18:55:22.988868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.271 2024/12/14 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.271 [2024-12-14 18:55:23.007735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.271 [2024-12-14 18:55:23.007764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.271 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.271 [2024-12-14 18:55:23.020743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.271 [2024-12-14 18:55:23.020771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.531 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.531 [2024-12-14 18:55:23.039613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.531 [2024-12-14 18:55:23.039642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.531 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.531 [2024-12-14 18:55:23.050207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.531 [2024-12-14 18:55:23.050238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.531 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.531 [2024-12-14 18:55:23.064438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.531 [2024-12-14 18:55:23.064468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.531 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.531 [2024-12-14 18:55:23.083580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.531 [2024-12-14 18:55:23.083623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.531 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.532 [2024-12-14 18:55:23.097052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.532 [2024-12-14 18:55:23.097082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.532 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.532 [2024-12-14 18:55:23.115097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.532 [2024-12-14 18:55:23.115126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.532 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.532 [2024-12-14 18:55:23.127114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:35:07.532 [2024-12-14 18:55:23.127143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.532 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.532 [2024-12-14 18:55:23.139479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.532 [2024-12-14 18:55:23.139510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.532 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.532 [2024-12-14 18:55:23.150683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.532 [2024-12-14 18:55:23.150715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.532 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.532 [2024-12-14 18:55:23.165102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.532 [2024-12-14 18:55:23.165132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.532 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.532 [2024-12-14 18:55:23.182770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.532 [2024-12-14 18:55:23.182799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.532 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.532 [2024-12-14 18:55:23.194913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.532 [2024-12-14 18:55:23.194944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.532 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.532 [2024-12-14 18:55:23.209646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.532 [2024-12-14 18:55:23.209675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.532 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.532 12102.50 IOPS, 94.55 MiB/s [2024-12-14T18:55:23.285Z] [2024-12-14 18:55:23.228014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.532 [2024-12-14 18:55:23.228044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.532 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.532 [2024-12-14 18:55:23.246664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.532 [2024-12-14 18:55:23.246694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.532 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.532 [2024-12-14 18:55:23.259567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.532 [2024-12-14 18:55:23.259620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.532 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.532 [2024-12-14 18:55:23.279355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.532 [2024-12-14 18:55:23.279386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.532 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.792 [2024-12-14 18:55:23.293919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.792 [2024-12-14 18:55:23.293949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.792 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.792 [2024-12-14 18:55:23.311111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.792 [2024-12-14 18:55:23.311141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.792 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.792 [2024-12-14 18:55:23.322090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:35:07.792 [2024-12-14 18:55:23.322121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.792 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.792 [2024-12-14 18:55:23.337615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.792 [2024-12-14 18:55:23.337643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.792 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.792 [2024-12-14 18:55:23.354392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.792 [2024-12-14 18:55:23.354423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.792 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.792 [2024-12-14 18:55:23.367312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.792 [2024-12-14 18:55:23.367344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.792 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.792 [2024-12-14 18:55:23.380060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.792 [2024-12-14 18:55:23.380089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.792 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.792 [2024-12-14 18:55:23.389303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.792 [2024-12-14 18:55:23.389333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.792 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.792 [2024-12-14 18:55:23.404507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.792 [2024-12-14 18:55:23.404537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.792 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.792 [2024-12-14 18:55:23.423714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.792 [2024-12-14 18:55:23.423744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.792 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.792 [2024-12-14 18:55:23.437559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.792 [2024-12-14 18:55:23.437590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.792 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.792 [2024-12-14 18:55:23.455777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.792 [2024-12-14 18:55:23.455809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.792 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.792 [2024-12-14 18:55:23.475284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.792 [2024-12-14 18:55:23.475315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.792 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.792 [2024-12-14 18:55:23.490565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.792 [2024-12-14 18:55:23.490613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.792 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.792 [2024-12-14 18:55:23.505143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.792 [2024-12-14 18:55:23.505173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.792 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.792 [2024-12-14 18:55:23.522263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.792 [2024-12-14 18:55:23.522294] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.792 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:07.792 [2024-12-14 18:55:23.537715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.792 [2024-12-14 18:55:23.537744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.792 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:08.052 [2024-12-14 18:55:23.555337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.052 [2024-12-14 18:55:23.555368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.052 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:08.052 [2024-12-14 18:55:23.567069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.052 [2024-12-14 18:55:23.567099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.052 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:08.052 [2024-12-14 18:55:23.581688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.052 [2024-12-14 18:55:23.581718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.052 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:08.052 [2024-12-14 18:55:23.599735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.052 [2024-12-14 18:55:23.599763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.052 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:08.052 [2024-12-14 18:55:23.618912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.052 [2024-12-14 18:55:23.618942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.052 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:08.052 [2024-12-14 18:55:23.631384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.052 [2024-12-14 18:55:23.631414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.052 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:08.052 [2024-12-14 18:55:23.644826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.052 [2024-12-14 18:55:23.644855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.052 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:08.052 [2024-12-14 18:55:23.664028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.052 [2024-12-14 18:55:23.664059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.052 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:08.052 [2024-12-14 18:55:23.682755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.052 [2024-12-14 18:55:23.682785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.052 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:08.052 [2024-12-14 18:55:23.697447] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.052 [2024-12-14 18:55:23.697480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.052 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:08.052 [2024-12-14 18:55:23.715618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.052 [2024-12-14 18:55:23.715646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.052 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:08.052 [2024-12-14 18:55:23.726305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.052 [2024-12-14 18:55:23.726334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:35:08.052 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:08.052 [2024-12-14 18:55:23.740161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.052 [2024-12-14 18:55:23.740191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.052 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:08.052 [2024-12-14 18:55:23.749590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.052 [2024-12-14 18:55:23.749631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.052 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:08.052 [2024-12-14 18:55:23.765046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.053 [2024-12-14 18:55:23.765078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.053 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:08.053 [2024-12-14 18:55:23.780433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.053 [2024-12-14 18:55:23.780462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.053 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:08.053 [2024-12-14 18:55:23.799955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.053 [2024-12-14 18:55:23.799983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.053 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:08.312 [2024-12-14 18:55:23.818425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.312 [2024-12-14 18:55:23.818460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.312 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:35:08.312 [2024-12-14 18:55:23.832376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.312 [2024-12-14 18:55:23.832411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.312 2024/12/14 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[repetitions condensed: the same three-message sequence (spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use; nvmf_rpc_ns_paused: Unable to add namespace; JSON-RPC error Code=-32602 Msg=Invalid parameters) recurs for every duplicate-NSID attempt from 18:55:23.851 through 18:55:25.977; only the distinct entries are kept below]
00:35:08.573 12121.67 IOPS, 94.70 MiB/s [2024-12-14T18:55:24.326Z]
00:35:09.612 12119.25 IOPS, 94.68 MiB/s [2024-12-14T18:55:25.365Z]
00:35:10.391 [2024-12-14 18:55:25.995175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:10.391 [2024-12-14 18:55:25.995231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.391 2024/12/14 18:55:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.391 [2024-12-14 18:55:26.007381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.391 [2024-12-14 18:55:26.007418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.391 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.391 [2024-12-14 18:55:26.016897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.391 [2024-12-14 18:55:26.016933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.391 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.391 [2024-12-14 18:55:26.032487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.391 [2024-12-14 18:55:26.032523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.391 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.391 [2024-12-14 18:55:26.053114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.391 [2024-12-14 18:55:26.053158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.391 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.391 [2024-12-14 18:55:26.068257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.391 [2024-12-14 18:55:26.068303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.391 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.391 [2024-12-14 18:55:26.078362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.391 [2024-12-14 18:55:26.078396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.391 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.391 [2024-12-14 18:55:26.092352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.391 [2024-12-14 18:55:26.092386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.391 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.391 [2024-12-14 18:55:26.112031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.391 [2024-12-14 18:55:26.112071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.391 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.391 [2024-12-14 18:55:26.122711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.391 [2024-12-14 18:55:26.122743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.391 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.391 [2024-12-14 18:55:26.137780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.391 [2024-12-14 18:55:26.137813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.391 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.651 [2024-12-14 18:55:26.153182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.651 [2024-12-14 18:55:26.153225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.651 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.651 [2024-12-14 18:55:26.171634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.651 [2024-12-14 18:55:26.171663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.651 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.651 [2024-12-14 18:55:26.183389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.651 [2024-12-14 18:55:26.183425] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.651 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.651 [2024-12-14 18:55:26.194873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.651 [2024-12-14 18:55:26.194906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.651 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.651 [2024-12-14 18:55:26.208776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.651 [2024-12-14 18:55:26.208809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.652 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.652 12148.00 IOPS, 94.91 MiB/s [2024-12-14T18:55:26.405Z] [2024-12-14 18:55:26.224507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.652 [2024-12-14 18:55:26.224541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.652 00:35:10.652 Latency(us) 00:35:10.652 [2024-12-14T18:55:26.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.652 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:35:10.652 Nvme1n1 : 5.01 12148.99 94.91 0.00 0.00 10521.67 2338.44 17873.45 00:35:10.652 [2024-12-14T18:55:26.405Z] =================================================================================================================== 00:35:10.652 [2024-12-14T18:55:26.405Z] Total : 12148.99 94.91 0.00 0.00 10521.67 2338.44 17873.45 00:35:10.652 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.652 [2024-12-14 18:55:26.236188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.652 [2024-12-14 18:55:26.236222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.652 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.652 [2024-12-14 18:55:26.248039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.652 [2024-12-14 18:55:26.248066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.652 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.652 [2024-12-14 18:55:26.260033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.652 [2024-12-14 18:55:26.260060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.652 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.652 [2024-12-14 18:55:26.272034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.652 [2024-12-14 18:55:26.272061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.652 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.652 [2024-12-14 18:55:26.284038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.652 [2024-12-14 18:55:26.284064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.652 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.652 [2024-12-14 18:55:26.296034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.652 [2024-12-14 18:55:26.296058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.652 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.652 [2024-12-14 18:55:26.308034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.652 [2024-12-14 18:55:26.308060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.652 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.652 [2024-12-14 18:55:26.320034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.652 [2024-12-14 18:55:26.320060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.652 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.652 [2024-12-14 18:55:26.332033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:35:10.652 [2024-12-14 18:55:26.332059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.652 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.652 [2024-12-14 18:55:26.344033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.652 [2024-12-14 18:55:26.344060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.652 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.652 [2024-12-14 18:55:26.356035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.652 [2024-12-14 18:55:26.356060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.652 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.652 [2024-12-14 18:55:26.368038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.652 [2024-12-14 18:55:26.368065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.652 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.652 [2024-12-14 18:55:26.380034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.652 [2024-12-14 18:55:26.380058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.652 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.652 [2024-12-14 18:55:26.392032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.652 [2024-12-14 18:55:26.392057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.652 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.911 [2024-12-14 18:55:26.404032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.911 [2024-12-14 18:55:26.404070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.911 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.911 [2024-12-14 18:55:26.416042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.911 [2024-12-14 18:55:26.416066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.911 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.911 [2024-12-14 18:55:26.428033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.911 [2024-12-14 18:55:26.428059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.911 2024/12/14 18:55:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:10.911 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (125548) - No such process 00:35:10.911 18:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 125548 00:35:10.911 18:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:10.911 18:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.912 18:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:10.912 18:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.912 18:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:10.912 18:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.912 18:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:10.912 delay0 00:35:10.912 18:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.912 18:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:35:10.912 18:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.912 18:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:10.912 18:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.912 18:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:35:10.912 [2024-12-14 18:55:26.625084] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:35:19.030 Initializing 
NVMe Controllers 00:35:19.030 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:35:19.030 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:19.030 Initialization complete. Launching workers. 00:35:19.030 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 297, failed: 12660 00:35:19.030 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12903, failed to submit 54 00:35:19.030 success 12747, unsuccessful 156, failed 0 00:35:19.030 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:35:19.030 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:35:19.030 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:19.030 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:35:19.030 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:19.030 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:35:19.030 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:19.030 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:19.030 rmmod nvme_tcp 00:35:19.030 rmmod nvme_fabrics 00:35:19.030 rmmod nvme_keyring 00:35:19.030 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:19.030 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:35:19.030 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:35:19.030 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 125399 ']' 00:35:19.030 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 125399 00:35:19.030 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 125399 ']' 00:35:19.031 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 125399 00:35:19.031 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:35:19.031 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:19.031 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 125399 00:35:19.031 killing process with pid 125399 00:35:19.031 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:19.031 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:19.031 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 125399' 00:35:19.031 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 125399 00:35:19.031 18:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 125399 00:35:19.031 
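For context, the Code=-32602 flood above is expected behavior: zcopy.sh deliberately keeps re-adding a namespace whose NSID is still claimed while bdevperf I/O runs, and the target rejects every attempt. A minimal sketch of the colliding call, assuming an SPDK target listening on the default /var/tmp/spdk.sock RPC socket (the NQN and bdev name come from this log; the malloc bdev sizes are illustrative, not taken from the log):

# create a 64 MiB ramdisk bdev with 512-byte blocks (sizes assumed for the sketch)
./scripts/rpc.py bdev_malloc_create -b malloc0 64 512
# the first add claims NSID 1 on the subsystem
./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
# any further add with the same explicit NSID fails exactly as logged:
# "Requested NSID 1 already in use" -> JSON-RPC error Code=-32602 (Invalid parameters)
./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0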
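The summary counters printed above also reconcile with each other; a quick check using only figures from the log (the arithmetic, not the tool, is the point):

# abort run: successes + unsuccessful = aborts submitted, and
# submitted + failed-to-submit equals the 297 completed + 12660 failed I/Os
echo $((12747 + 156))   # 12903, matching "abort submitted 12903"
echo $((12903 + 54))    # 12957
echo $((297 + 12660))   # 12957
# bdevperf table: 8 KiB I/Os at 12148.99 IOPS is ~94.91 MiB/s
echo 'scale=2; 12148.99 * 8192 / 1048576' | bc   # 94.91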
18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:35:19.031 ************************************ 
00:35:19.031 END TEST nvmf_zcopy 00:35:19.031 ************************************ 00:35:19.031 00:35:19.031 real 0m25.439s 00:35:19.031 user 0m37.113s 00:35:19.031 sys 0m9.840s 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:19.031 ************************************ 00:35:19.031 START TEST nvmf_nmic 00:35:19.031 ************************************ 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:19.031 * Looking for test storage... 00:35:19.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( 
v = 0 )) 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:19.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.031 --rc genhtml_branch_coverage=1 00:35:19.031 --rc genhtml_function_coverage=1 00:35:19.031 --rc genhtml_legend=1 00:35:19.031 --rc geninfo_all_blocks=1 00:35:19.031 --rc geninfo_unexecuted_blocks=1 00:35:19.031 00:35:19.031 ' 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:19.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.031 --rc genhtml_branch_coverage=1 00:35:19.031 --rc genhtml_function_coverage=1 00:35:19.031 --rc genhtml_legend=1 00:35:19.031 --rc geninfo_all_blocks=1 00:35:19.031 --rc geninfo_unexecuted_blocks=1 00:35:19.031 00:35:19.031 ' 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:19.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.031 --rc genhtml_branch_coverage=1 00:35:19.031 --rc genhtml_function_coverage=1 00:35:19.031 --rc genhtml_legend=1 00:35:19.031 --rc geninfo_all_blocks=1 00:35:19.031 --rc geninfo_unexecuted_blocks=1 00:35:19.031 00:35:19.031 ' 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:19.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.031 --rc genhtml_branch_coverage=1 00:35:19.031 --rc genhtml_function_coverage=1 00:35:19.031 --rc genhtml_legend=1 00:35:19.031 --rc geninfo_all_blocks=1 00:35:19.031 --rc 
geninfo_unexecuted_blocks=1 00:35:19.031 00:35:19.031 ' 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:19.031 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.032
18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=[... the same entries with /opt/go/1.21.1/bin rotated to the front ...] 00:35:19.032
18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=[... the same entries with /opt/protoc/21.7/bin rotated to the front ...] 00:35:19.032
18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:35:19.032
18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo [... the @4 PATH value, echoed once more ...] 00:35:19.032
18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:35:19.032
18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:19.032
18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:19.032
18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:19.032
18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:19.032
18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:19.032
18:55:34
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@456 -- # nvmf_veth_init 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:35:19.032 Cannot find device "nvmf_init_br" 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:35:19.032 Cannot find device "nvmf_init_br2" 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:35:19.032 Cannot find device "nvmf_tgt_br" 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:35:19.032 Cannot find device "nvmf_tgt_br2" 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:35:19.032 Cannot find device "nvmf_init_br" 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:35:19.032 Cannot find device "nvmf_init_br2" 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:35:19.032 Cannot find device "nvmf_tgt_br" 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:35:19.032 Cannot find device "nvmf_tgt_br2" 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:35:19.032 Cannot find device "nvmf_br" 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:35:19.032 Cannot find device "nvmf_init_if" 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:35:19.032 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:35:19.032 Cannot find device "nvmf_init_if2" 00:35:19.033 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:35:19.033 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:19.033 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:19.033 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:35:19.033 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:19.033 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:19.033 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:35:19.033 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:35:19.033 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:19.033 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:35:19.033 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:19.033 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:19.033 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:19.033 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:35:19.292 18:55:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:35:19.292 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:35:19.292 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:35:19.292 00:35:19.292 --- 10.0.0.3 ping statistics --- 00:35:19.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.292 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:35:19.292 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:35:19.292 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:35:19.292 00:35:19.292 --- 10.0.0.4 ping statistics --- 00:35:19.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.292 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:19.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:19.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:35:19.292 00:35:19.292 --- 10.0.0.1 ping statistics --- 00:35:19.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.292 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:35:19.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:19.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:35:19.292 00:35:19.292 --- 10.0.0.2 ping statistics --- 00:35:19.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.292 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@457 -- # return 0 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:19.292 18:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:19.292 18:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:35:19.292 18:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:19.292 18:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:19.292 18:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:19.292 18:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=125920 00:35:19.292 18:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:35:19.292 18:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 125920
00:35:19.292 18:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 125920 ']'
00:35:19.292 18:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:19.292 18:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:19.292 18:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:19.292 18:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:19.292 18:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:35:19.555 [2024-12-14 18:55:35.076332] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
[2024-12-14 18:55:35.077620] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
[2024-12-14 18:55:35.077686] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-12-14 18:55:35.221681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-12-14 18:55:35.298402] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-12-14 18:55:35.298802] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-12-14 18:55:35.298986] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-12-14 18:55:35.299194] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-12-14 18:55:35.299411] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-12-14 18:55:35.299644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
[2024-12-14 18:55:35.299693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
[2024-12-14 18:55:35.299773] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
[2024-12-14 18:55:35.299783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-14 18:55:35.414489] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
[2024-12-14 18:55:35.414914] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
[2024-12-14 18:55:35.415754] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
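nvmf/common.sh@177-@219 built the network this target now binds to: namespace nvmf_tgt_ns_spdk holds the target ends of two veth pairs (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4), the root namespace keeps the initiator ends (10.0.0.1 and 10.0.0.2), all four peer interfaces are enslaved to the bridge nvmf_br, iptables ACCEPT rules open TCP port 4420, and the four pings prove reachability in both directions. @504 then launches the target inside that namespace. A condensed sketch of the launch, assuming the app is backgrounded and waitforlisten polls the RPC socket in the usual way (the pid 125920 and the socket path /var/tmp/spdk.sock come from the trace above):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &   # core mask 0xF = 4 reactors
    nvmfpid=$!                                     # 125920 in this run
    waitforlisten "$nvmfpid"   # blocks until /var/tmp/spdk.sock answers

--interrupt-mode is why thread.c reports each poll group switching to intr mode instead of busy-polling; the remaining thread notices continue below.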
00:35:19.817 [2024-12-14 18:55:35.415776] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:19.817 [2024-12-14 18:55:35.416295] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:20.416 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:20.416 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:35:20.416 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:20.416 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:20.416 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:20.675 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:20.676 [2024-12-14 18:55:36.189057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:20.676 Malloc0 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:35:20.676 
18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:35:20.676 [2024-12-14 18:55:36.265048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:20.676 test case1: single bdev can't be used in multiple subsystems
00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:35:20.676 [2024-12-14 18:55:36.288787] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:35:20.676 [2024-12-14 18:55:36.288837] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:35:20.676 [2024-12-14 18:55:36.288852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:20.676 2024/12/14 18:55:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:35:20.676 request:
00:35:20.676 {
00:35:20.676 "method": "nvmf_subsystem_add_ns",
00:35:20.676 "params": {
00:35:20.676 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:35:20.676 "namespace": {
00:35:20.676 "bdev_name": "Malloc0",
00:35:20.676 "no_auto_visible": false
00:35:20.676 }
00:35:20.676 }
00:35:20.676 }
00:35:20.676 Got JSON-RPC error response
00:35:20.676 GoRPCClient: error on JSON-RPC call
00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic --
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:35:20.676 Adding namespace failed - expected result. 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:35:20.676 test case2: host connect to nvmf target in multiple paths 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:20.676 [2024-12-14 18:55:36.300912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:35:20.676 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:35:20.935 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:35:20.935 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:35:20.935 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:35:20.935 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:35:20.935 18:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:35:22.839 18:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:35:22.839 18:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:35:22.839 18:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:35:22.839 18:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:35:22.839 18:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:35:22.839 18:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:35:22.839 18:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 
-- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:35:22.839 [global]
00:35:22.839 thread=1
00:35:22.839 invalidate=1
00:35:22.839 rw=write
00:35:22.839 time_based=1
00:35:22.839 runtime=1
00:35:22.839 ioengine=libaio
00:35:22.839 direct=1
00:35:22.839 bs=4096
00:35:22.839 iodepth=1
00:35:22.839 norandommap=0
00:35:22.839 numjobs=1
00:35:22.839
00:35:22.839 verify_dump=1
00:35:22.839 verify_backlog=512
00:35:22.839 verify_state_save=0
00:35:22.839 do_verify=1
00:35:22.839 verify=crc32c-intel
00:35:22.839 [job0]
00:35:22.839 filename=/dev/nvme0n1
00:35:22.839 Could not set queue depth (nvme0n1)
00:35:23.098 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:35:23.098 fio-3.35
00:35:23.098 Starting 1 thread
00:35:24.477
00:35:24.477 job0: (groupid=0, jobs=1): err= 0: pid=126024: Sat Dec 14 18:55:39 2024
00:35:24.477 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec)
00:35:24.477 slat (nsec): min=11648, max=54709, avg=14601.92, stdev=4776.33
00:35:24.477 clat (usec): min=155, max=879, avg=197.01, stdev=27.73
00:35:24.477 lat (usec): min=167, max=895, avg=211.61, stdev=28.41
00:35:24.477 clat percentiles (usec):
00:35:24.477 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 178],
00:35:24.477 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198],
00:35:24.477 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 229], 95.00th=[ 243],
00:35:24.477 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 474], 99.95th=[ 478],
00:35:24.477 | 99.99th=[ 881]
00:35:24.477 write: IOPS=3012, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1001msec); 0 zone resets
00:35:24.477 slat (usec): min=17, max=116, avg=21.95, stdev= 7.09
00:35:24.477 clat (usec): min=88, max=576, avg=127.20, stdev=23.28
00:35:24.477 lat (usec): min=115, max=610, avg=149.15, stdev=25.56
00:35:24.477 clat percentiles (usec):
00:35:24.477 | 1.00th=[ 102], 5.00th=[ 106], 10.00th=[ 109], 20.00th=[ 113],
00:35:24.477 | 30.00th=[ 116], 40.00th=[ 119], 50.00th=[ 122], 60.00th=[ 126],
00:35:24.477 | 70.00th=[ 131], 80.00th=[ 141], 90.00th=[ 155], 95.00th=[ 167],
00:35:24.477 | 99.00th=[ 190], 99.50th=[ 204], 99.90th=[ 306], 99.95th=[ 570],
00:35:24.477 | 99.99th=[ 578]
00:35:24.477 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1
00:35:24.477 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:35:24.477 lat (usec) : 100=0.27%, 250=98.19%, 500=1.49%, 750=0.04%, 1000=0.02%
00:35:24.477 cpu : usr=1.80%, sys=7.70%, ctx=5576, majf=0, minf=5
00:35:24.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:24.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:24.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:24.477 issued rwts: total=2560,3016,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:24.477 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:24.477
00:35:24.477 Run status group 0 (all jobs):
00:35:24.477 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec
00:35:24.477 WRITE: bw=11.8MiB/s (12.3MB/s), 11.8MiB/s-11.8MiB/s (12.3MB/s-12.3MB/s), io=11.8MiB (12.4MB), run=1001-1001msec
00:35:24.477
00:35:24.477 Disk stats (read/write):
00:35:24.478 nvme0n1: ios=2447/2560, merge=0/0, ticks=503/358, in_queue=861, util=91.48%
18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n
nqn.2016-06.io.spdk:cnode1 00:35:24.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:24.478 rmmod nvme_tcp 00:35:24.478 rmmod nvme_fabrics 00:35:24.478 rmmod nvme_keyring 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 125920 ']' 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 125920 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 125920 ']' 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 125920 00:35:24.478 18:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:35:24.478 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:24.478 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 125920 00:35:24.478 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:24.478 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:35:24.478 killing process with pid 125920 00:35:24.478 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 125920' 00:35:24.478 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 125920 00:35:24.478 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 125920 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
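The cleanup mirrors the setup: iptr (expanded at nvmf/common.sh@787 above as iptables-save piped through grep -v SPDK_NVMF into iptables-restore) removes exactly the rules that ipts tagged at insertion time with -m comment --comment 'SPDK_NVMF:...', leaving unrelated firewall rules intact, before the veth teardown and the namespace removal that follows. In effect:

    # Drop only the rules this test suite added; everything else survives.
    iptables-save | grep -v SPDK_NVMF | iptables-restore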
00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:24.737 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.996 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:35:24.996 00:35:24.996 real 0m6.155s 00:35:24.996 user 0m15.266s 00:35:24.996 sys 0m1.925s 00:35:24.996 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:24.996 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:24.996 ************************************ 00:35:24.996 END TEST nvmf_nmic 00:35:24.996 ************************************ 00:35:24.996 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:24.996 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:24.996 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:24.996 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:24.996 ************************************ 00:35:24.996 START TEST nvmf_fio_target 00:35:24.996 ************************************ 00:35:24.996 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:24.996 * Looking for test storage... 
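fio.sh opens with the same common.sh preamble as nmic.sh; the trace below is scripts/common.sh deciding whether the installed lcov predates 2.x before exporting LCOV_OPTS. With lcov 1.15 the comparison reduces to: split "1.15" and "2" on dots, compare the first fields, 1 < 2, return 0 (true). A standalone sketch with the array names borrowed from that trace:

    IFS=.- read -ra ver1 <<< "1.15"   # ver1=(1 15)
    IFS=.- read -ra ver2 <<< "2"      # ver2=(2)
    (( ver1[0] < ver2[0] )) && echo "lcov 1.15 < 2"   # true: use pre-2.x LCOV_OPTS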
00:35:24.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:24.996 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:24.996 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:35:24.996 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:24.996 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:24.996 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:24.996 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:24.996 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:24.996 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:24.996 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:25.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.256 --rc genhtml_branch_coverage=1 00:35:25.256 --rc genhtml_function_coverage=1 00:35:25.256 --rc genhtml_legend=1 00:35:25.256 --rc geninfo_all_blocks=1 00:35:25.256 --rc geninfo_unexecuted_blocks=1 00:35:25.256 00:35:25.256 ' 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:25.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.256 --rc genhtml_branch_coverage=1 00:35:25.256 --rc genhtml_function_coverage=1 00:35:25.256 --rc genhtml_legend=1 00:35:25.256 --rc geninfo_all_blocks=1 00:35:25.256 --rc geninfo_unexecuted_blocks=1 00:35:25.256 00:35:25.256 ' 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:25.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.256 --rc genhtml_branch_coverage=1 00:35:25.256 --rc genhtml_function_coverage=1 00:35:25.256 --rc genhtml_legend=1 00:35:25.256 --rc geninfo_all_blocks=1 00:35:25.256 --rc geninfo_unexecuted_blocks=1 00:35:25.256 00:35:25.256 ' 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:25.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.256 --rc genhtml_branch_coverage=1 00:35:25.256 --rc genhtml_function_coverage=1 00:35:25.256 --rc genhtml_legend=1 00:35:25.256 --rc geninfo_all_blocks=1 00:35:25.256 --rc geninfo_unexecuted_blocks=1 00:35:25.256 
00:35:25.256 ' 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:25.256 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:35:25.257 18:55:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:35:25.257 Cannot find device "nvmf_init_br" 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:35:25.257 Cannot find device "nvmf_init_br2" 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:35:25.257 Cannot find device "nvmf_tgt_br" 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:35:25.257 Cannot find device "nvmf_tgt_br2" 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:35:25.257 Cannot find device "nvmf_init_br" 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:35:25.257 Cannot find device "nvmf_init_br2" 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:35:25.257 Cannot find device "nvmf_tgt_br" 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:35:25.257 Cannot find device "nvmf_tgt_br2" 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:35:25.257 Cannot find device "nvmf_br" 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:35:25.257 Cannot find device "nvmf_init_if" 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:35:25.257 Cannot find device "nvmf_init_if2" 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:25.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:25.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:25.257 18:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:35:25.516 18:55:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:25.516 18:55:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:35:25.516 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:25.516 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:35:25.516 00:35:25.516 --- 10.0.0.3 ping statistics --- 00:35:25.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.516 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:35:25.516 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:35:25.516 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.104 ms 00:35:25.516 00:35:25.516 --- 10.0.0.4 ping statistics --- 00:35:25.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.516 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:35:25.516 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:25.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:25.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:35:25.517 00:35:25.517 --- 10.0.0.1 ping statistics --- 00:35:25.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.517 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:35:25.517 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:35:25.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:25.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:35:25.517 00:35:25.517 --- 10.0.0.2 ping statistics --- 00:35:25.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.517 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:35:25.517 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:25.517 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@457 -- # return 0 00:35:25.517 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:25.517 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:25.517 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:25.517 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:25.517 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:25.517 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:25.517 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:25.517 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:35:25.517 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:25.775 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:25.775 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:25.775 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=126257 00:35:25.775 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:25.775 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 126257 00:35:25.775 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 126257 ']' 00:35:25.775 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:25.775 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:25.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:25.775 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:25.775 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:25.775 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:25.775 [2024-12-14 18:55:41.343636] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
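The four pings confirm both directions across the bridge before nvmfappstart launches the target inside the namespace: -i 0 sets the shared-memory instance id, -e 0xFFFF enables all tracepoint groups, -m 0xF runs reactors on cores 0-3, and --interrupt-mode produces the notice above by switching the reactors from busy polling to event-driven operation. A minimal launch-and-wait sketch, assuming the default RPC socket /var/tmp/spdk.sock (the polling loop is illustrative, not the exact waitforlisten helper):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # poll the RPC socket until the target answers
  until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done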
00:35:25.775 [2024-12-14 18:55:41.344931] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:25.775 [2024-12-14 18:55:41.345010] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:25.775 [2024-12-14 18:55:41.488748] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:26.034 [2024-12-14 18:55:41.560986] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:26.034 [2024-12-14 18:55:41.561060] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:26.034 [2024-12-14 18:55:41.561076] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:26.034 [2024-12-14 18:55:41.561087] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:26.034 [2024-12-14 18:55:41.561097] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:26.034 [2024-12-14 18:55:41.561192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:26.034 [2024-12-14 18:55:41.561326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:35:26.034 [2024-12-14 18:55:41.561909] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:35:26.034 [2024-12-14 18:55:41.561968] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:26.034 [2024-12-14 18:55:41.673744] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:26.034 [2024-12-14 18:55:41.674221] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:26.034 [2024-12-14 18:55:41.674819] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:26.034 [2024-12-14 18:55:41.675259] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:26.034 [2024-12-14 18:55:41.675819] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
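With the app thread and all four poll-group threads in interrupt mode, the target is provisioned entirely over JSON-RPC, as the trace below shows: create the TCP transport, create seven 64 MiB malloc bdevs with 512-byte blocks, combine two into a RAID-0 and three into a concat volume, publish the bdevs as namespaces of subsystem cnode1, and open a TCP listener on the target-side address. Condensed from the commands that follow (paths shortened to rpc.py; -u sets an 8192-byte io_unit_size, -z a 64 KiB strip size, -a allows any host NQN):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                                   # prints the new bdev name, e.g. Malloc0
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'   # RAID-0 over two malloc bdevs
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # repeated for each exported bdev
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420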
00:35:26.034 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:26.034 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:35:26.035 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:26.035 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:26.035 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:26.035 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:26.035 18:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:26.602 [2024-12-14 18:55:42.059043] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:26.602 18:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:26.602 18:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:35:26.602 18:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:27.170 18:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:35:27.170 18:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:27.429 18:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:35:27.429 18:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:27.687 18:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:35:27.687 18:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:35:27.946 18:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:28.205 18:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:35:28.205 18:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:28.464 18:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:35:28.464 18:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:28.723 18:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:35:28.723 18:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:35:28.982 18:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:29.241 18:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:29.241 18:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:29.507 18:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:29.507 18:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:29.767 18:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:35:30.025 [2024-12-14 18:55:45.611059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:30.025 18:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:35:30.283 18:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:35:30.542 18:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:35:30.542 18:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:35:30.542 18:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:35:30.542 18:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:35:30.542 18:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:35:30.542 18:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:35:30.542 18:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:35:33.075 18:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:35:33.076 18:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:35:33.076 18:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:35:33.076 18:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:35:33.076 18:55:48 
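nvme connect then attaches the host to the subsystem through the initiator-side path, and waitforserial polls until all four namespaces (Malloc0, Malloc1, raid0, concat0) show up as block devices carrying the subsystem serial. A sketch of that wait using the same probe the trace logs (the retry bound mirrors the (( i++ <= 15 )) loop above):

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6
  i=0
  while (( i++ <= 15 )); do
      # count devices whose SERIAL column matches the subsystem serial
      (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 4 )) && break
      sleep 2
  done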
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:35:33.076 18:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:35:33.076 18:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:33.076 [global] 00:35:33.076 thread=1 00:35:33.076 invalidate=1 00:35:33.076 rw=write 00:35:33.076 time_based=1 00:35:33.076 runtime=1 00:35:33.076 ioengine=libaio 00:35:33.076 direct=1 00:35:33.076 bs=4096 00:35:33.076 iodepth=1 00:35:33.076 norandommap=0 00:35:33.076 numjobs=1 00:35:33.076 00:35:33.076 verify_dump=1 00:35:33.076 verify_backlog=512 00:35:33.076 verify_state_save=0 00:35:33.076 do_verify=1 00:35:33.076 verify=crc32c-intel 00:35:33.076 [job0] 00:35:33.076 filename=/dev/nvme0n1 00:35:33.076 [job1] 00:35:33.076 filename=/dev/nvme0n2 00:35:33.076 [job2] 00:35:33.076 filename=/dev/nvme0n3 00:35:33.076 [job3] 00:35:33.076 filename=/dev/nvme0n4 00:35:33.076 Could not set queue depth (nvme0n1) 00:35:33.076 Could not set queue depth (nvme0n2) 00:35:33.076 Could not set queue depth (nvme0n3) 00:35:33.076 Could not set queue depth (nvme0n4) 00:35:33.076 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:33.076 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:33.076 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:33.076 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:33.076 fio-3.35 00:35:33.076 Starting 4 threads 00:35:34.012 00:35:34.012 job0: (groupid=0, jobs=1): err= 0: pid=126532: Sat Dec 14 18:55:49 2024 00:35:34.012 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:35:34.012 slat (nsec): min=11775, max=86410, avg=17715.11, stdev=5609.99 00:35:34.012 clat (usec): min=252, max=42537, avg=505.67, stdev=1317.73 00:35:34.012 lat (usec): min=270, max=42562, avg=523.38, stdev=1317.96 00:35:34.012 clat percentiles (usec): 00:35:34.012 | 1.00th=[ 289], 5.00th=[ 338], 10.00th=[ 375], 20.00th=[ 404], 00:35:34.012 | 30.00th=[ 420], 40.00th=[ 437], 50.00th=[ 445], 60.00th=[ 461], 00:35:34.012 | 70.00th=[ 486], 80.00th=[ 523], 90.00th=[ 594], 95.00th=[ 644], 00:35:34.012 | 99.00th=[ 717], 99.50th=[ 750], 99.90th=[ 783], 99.95th=[42730], 00:35:34.012 | 99.99th=[42730] 00:35:34.012 write: IOPS=1246, BW=4987KiB/s (5107kB/s)(4992KiB/1001msec); 0 zone resets 00:35:34.012 slat (nsec): min=17947, max=92063, avg=29073.13, stdev=8845.90 00:35:34.012 clat (usec): min=150, max=566, avg=338.84, stdev=58.76 00:35:34.013 lat (usec): min=199, max=592, avg=367.92, stdev=57.81 00:35:34.013 clat percentiles (usec): 00:35:34.013 | 1.00th=[ 208], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 293], 00:35:34.013 | 30.00th=[ 306], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 343], 00:35:34.013 | 70.00th=[ 359], 80.00th=[ 383], 90.00th=[ 429], 95.00th=[ 453], 00:35:34.013 | 99.00th=[ 498], 99.50th=[ 523], 99.90th=[ 545], 99.95th=[ 570], 00:35:34.013 | 99.99th=[ 570] 00:35:34.013 bw ( KiB/s): min= 4440, max= 4440, per=19.15%, avg=4440.00, stdev= 0.00, samples=1 00:35:34.013 iops : min= 1110, max= 1110, avg=1110.00, stdev= 0.00, samples=1 00:35:34.013 lat (usec) : 250=1.76%, 500=86.44%, 750=11.58%, 1000=0.18% 
00:35:34.013 lat (msec) : 50=0.04% 00:35:34.013 cpu : usr=2.10%, sys=3.30%, ctx=2273, majf=0, minf=7 00:35:34.013 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.013 issued rwts: total=1024,1248,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.013 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:34.013 job1: (groupid=0, jobs=1): err= 0: pid=126533: Sat Dec 14 18:55:49 2024 00:35:34.013 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:35:34.013 slat (nsec): min=11756, max=82658, avg=22081.99, stdev=8240.05 00:35:34.013 clat (usec): min=258, max=42589, avg=500.88, stdev=1319.35 00:35:34.013 lat (usec): min=270, max=42602, avg=522.96, stdev=1319.17 00:35:34.013 clat percentiles (usec): 00:35:34.013 | 1.00th=[ 293], 5.00th=[ 338], 10.00th=[ 371], 20.00th=[ 396], 00:35:34.013 | 30.00th=[ 412], 40.00th=[ 429], 50.00th=[ 445], 60.00th=[ 461], 00:35:34.013 | 70.00th=[ 482], 80.00th=[ 515], 90.00th=[ 594], 95.00th=[ 635], 00:35:34.013 | 99.00th=[ 693], 99.50th=[ 717], 99.90th=[ 758], 99.95th=[42730], 00:35:34.013 | 99.99th=[42730] 00:35:34.013 write: IOPS=1246, BW=4987KiB/s (5107kB/s)(4992KiB/1001msec); 0 zone resets 00:35:34.013 slat (usec): min=18, max=110, avg=31.39, stdev=10.33 00:35:34.013 clat (usec): min=115, max=533, avg=336.23, stdev=55.14 00:35:34.013 lat (usec): min=156, max=567, avg=367.61, stdev=56.26 00:35:34.013 clat percentiles (usec): 00:35:34.013 | 1.00th=[ 200], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 293], 00:35:34.013 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 343], 00:35:34.013 | 70.00th=[ 355], 80.00th=[ 379], 90.00th=[ 416], 95.00th=[ 437], 00:35:34.013 | 99.00th=[ 482], 99.50th=[ 498], 99.90th=[ 537], 99.95th=[ 537], 00:35:34.013 | 99.99th=[ 537] 00:35:34.013 bw ( KiB/s): min= 4432, max= 4432, per=19.12%, avg=4432.00, stdev= 0.00, samples=1 00:35:34.013 iops : min= 1108, max= 1108, avg=1108.00, stdev= 0.00, samples=1 00:35:34.013 lat (usec) : 250=1.72%, 500=87.41%, 750=10.74%, 1000=0.09% 00:35:34.013 lat (msec) : 50=0.04% 00:35:34.013 cpu : usr=1.30%, sys=4.90%, ctx=2273, majf=0, minf=13 00:35:34.013 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.013 issued rwts: total=1024,1248,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.013 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:34.013 job2: (groupid=0, jobs=1): err= 0: pid=126534: Sat Dec 14 18:55:49 2024 00:35:34.013 read: IOPS=1062, BW=4252KiB/s (4354kB/s)(4256KiB/1001msec) 00:35:34.013 slat (nsec): min=17215, max=70235, avg=25870.43, stdev=7454.77 00:35:34.013 clat (usec): min=176, max=700, avg=377.94, stdev=90.47 00:35:34.013 lat (usec): min=210, max=736, avg=403.81, stdev=91.92 00:35:34.013 clat percentiles (usec): 00:35:34.013 | 1.00th=[ 223], 5.00th=[ 243], 10.00th=[ 255], 20.00th=[ 285], 00:35:34.013 | 30.00th=[ 322], 40.00th=[ 359], 50.00th=[ 388], 60.00th=[ 408], 00:35:34.013 | 70.00th=[ 424], 80.00th=[ 449], 90.00th=[ 474], 95.00th=[ 519], 00:35:34.013 | 99.00th=[ 635], 99.50th=[ 660], 99.90th=[ 660], 99.95th=[ 701], 00:35:34.013 | 99.99th=[ 701] 00:35:34.013 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:35:34.013 slat 
(usec): min=27, max=120, avg=45.15, stdev=12.35 00:35:34.013 clat (usec): min=106, max=2887, avg=320.90, stdev=93.82 00:35:34.013 lat (usec): min=138, max=2928, avg=366.06, stdev=95.00 00:35:34.013 clat percentiles (usec): 00:35:34.013 | 1.00th=[ 161], 5.00th=[ 215], 10.00th=[ 251], 20.00th=[ 269], 00:35:34.013 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 330], 00:35:34.013 | 70.00th=[ 359], 80.00th=[ 379], 90.00th=[ 404], 95.00th=[ 429], 00:35:34.013 | 99.00th=[ 482], 99.50th=[ 486], 99.90th=[ 898], 99.95th=[ 2900], 00:35:34.013 | 99.99th=[ 2900] 00:35:34.013 bw ( KiB/s): min= 6368, max= 6368, per=27.47%, avg=6368.00, stdev= 0.00, samples=1 00:35:34.013 iops : min= 1592, max= 1592, avg=1592.00, stdev= 0.00, samples=1 00:35:34.013 lat (usec) : 250=8.92%, 500=88.08%, 750=2.92%, 1000=0.04% 00:35:34.013 lat (msec) : 4=0.04% 00:35:34.013 cpu : usr=2.30%, sys=6.70%, ctx=2607, majf=0, minf=9 00:35:34.013 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.013 issued rwts: total=1064,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.013 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:34.013 job3: (groupid=0, jobs=1): err= 0: pid=126535: Sat Dec 14 18:55:49 2024 00:35:34.013 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:35:34.013 slat (nsec): min=12011, max=52209, avg=17940.17, stdev=5340.59 00:35:34.013 clat (usec): min=240, max=7590, avg=322.47, stdev=220.12 00:35:34.013 lat (usec): min=254, max=7621, avg=340.41, stdev=220.57 00:35:34.013 clat percentiles (usec): 00:35:34.013 | 1.00th=[ 255], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 285], 00:35:34.013 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 318], 00:35:34.013 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 367], 00:35:34.013 | 99.00th=[ 461], 99.50th=[ 594], 99.90th=[ 3392], 99.95th=[ 7570], 00:35:34.013 | 99.99th=[ 7570] 00:35:34.013 write: IOPS=1767, BW=7069KiB/s (7239kB/s)(7076KiB/1001msec); 0 zone resets 00:35:34.013 slat (nsec): min=17466, max=80247, avg=28980.69, stdev=8947.18 00:35:34.013 clat (usec): min=162, max=625, avg=236.96, stdev=33.66 00:35:34.013 lat (usec): min=187, max=649, avg=265.94, stdev=35.73 00:35:34.013 clat percentiles (usec): 00:35:34.013 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 208], 00:35:34.013 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 241], 00:35:34.013 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 293], 00:35:34.013 | 99.00th=[ 338], 99.50th=[ 355], 99.90th=[ 412], 99.95th=[ 627], 00:35:34.013 | 99.99th=[ 627] 00:35:34.013 bw ( KiB/s): min= 8192, max= 8192, per=35.34%, avg=8192.00, stdev= 0.00, samples=1 00:35:34.013 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:35:34.013 lat (usec) : 250=37.37%, 500=62.30%, 750=0.12%, 1000=0.06% 00:35:34.013 lat (msec) : 2=0.03%, 4=0.09%, 10=0.03% 00:35:34.013 cpu : usr=1.70%, sys=5.60%, ctx=3305, majf=0, minf=9 00:35:34.013 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.013 issued rwts: total=1536,1769,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.013 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:34.013 00:35:34.013 
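The per-job statistics above come from the generated job file shown earlier, which pairs one job with each exported namespace (/dev/nvme0n1 through /dev/nvme0n4) and drives 4 KiB, queue-depth-1 sequential writes for one time-based second, then re-reads the data and checks crc32c checksums (do_verify=1). In the output, slat is submission latency, clat completion latency, and lat their sum; the run totals and disk stats follow below. As a plain command line, job0 is roughly equivalent to (a sketch of the same parameters, bypassing the fio-wrapper script):

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based=1 --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512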
Run status group 0 (all jobs): 00:35:34.013 READ: bw=18.1MiB/s (19.0MB/s), 4092KiB/s-6138KiB/s (4190kB/s-6285kB/s), io=18.2MiB (19.0MB), run=1001-1001msec 00:35:34.013 WRITE: bw=22.6MiB/s (23.7MB/s), 4987KiB/s-7069KiB/s (5107kB/s-7239kB/s), io=22.7MiB (23.8MB), run=1001-1001msec 00:35:34.013 00:35:34.013 Disk stats (read/write): 00:35:34.013 nvme0n1: ios=937/1024, merge=0/0, ticks=467/341, in_queue=808, util=88.18% 00:35:34.013 nvme0n2: ios=936/1024, merge=0/0, ticks=492/346, in_queue=838, util=89.47% 00:35:34.013 nvme0n3: ios=1051/1171, merge=0/0, ticks=427/407, in_queue=834, util=89.79% 00:35:34.013 nvme0n4: ios=1361/1536, merge=0/0, ticks=448/389, in_queue=837, util=89.42% 00:35:34.013 18:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:35:34.013 [global] 00:35:34.013 thread=1 00:35:34.013 invalidate=1 00:35:34.013 rw=randwrite 00:35:34.013 time_based=1 00:35:34.013 runtime=1 00:35:34.013 ioengine=libaio 00:35:34.013 direct=1 00:35:34.013 bs=4096 00:35:34.013 iodepth=1 00:35:34.013 norandommap=0 00:35:34.013 numjobs=1 00:35:34.013 00:35:34.013 verify_dump=1 00:35:34.013 verify_backlog=512 00:35:34.013 verify_state_save=0 00:35:34.013 do_verify=1 00:35:34.013 verify=crc32c-intel 00:35:34.013 [job0] 00:35:34.013 filename=/dev/nvme0n1 00:35:34.013 [job1] 00:35:34.013 filename=/dev/nvme0n2 00:35:34.013 [job2] 00:35:34.013 filename=/dev/nvme0n3 00:35:34.013 [job3] 00:35:34.013 filename=/dev/nvme0n4 00:35:34.013 Could not set queue depth (nvme0n1) 00:35:34.013 Could not set queue depth (nvme0n2) 00:35:34.013 Could not set queue depth (nvme0n3) 00:35:34.013 Could not set queue depth (nvme0n4) 00:35:34.272 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:34.272 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:34.272 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:34.272 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:34.272 fio-3.35 00:35:34.272 Starting 4 threads 00:35:35.650 00:35:35.650 job0: (groupid=0, jobs=1): err= 0: pid=126588: Sat Dec 14 18:55:50 2024 00:35:35.650 read: IOPS=1077, BW=4312KiB/s (4415kB/s)(4316KiB/1001msec) 00:35:35.650 slat (nsec): min=10376, max=62813, avg=15497.73, stdev=4432.07 00:35:35.650 clat (usec): min=199, max=41217, avg=424.62, stdev=1244.03 00:35:35.650 lat (usec): min=217, max=41235, avg=440.12, stdev=1244.13 00:35:35.650 clat percentiles (usec): 00:35:35.650 | 1.00th=[ 247], 5.00th=[ 318], 10.00th=[ 334], 20.00th=[ 351], 00:35:35.650 | 30.00th=[ 363], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 396], 00:35:35.650 | 70.00th=[ 408], 80.00th=[ 420], 90.00th=[ 449], 95.00th=[ 482], 00:35:35.650 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 799], 99.95th=[41157], 00:35:35.650 | 99.99th=[41157] 00:35:35.650 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:35:35.650 slat (usec): min=14, max=108, avg=24.72, stdev= 7.27 00:35:35.650 clat (usec): min=168, max=2807, avg=313.83, stdev=85.14 00:35:35.650 lat (usec): min=193, max=2832, avg=338.55, stdev=85.24 00:35:35.650 clat percentiles (usec): 00:35:35.650 | 1.00th=[ 233], 5.00th=[ 253], 10.00th=[ 260], 20.00th=[ 273], 00:35:35.650 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 318], 
00:35:35.650 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 375], 95.00th=[ 400], 00:35:35.650 | 99.00th=[ 461], 99.50th=[ 523], 99.90th=[ 1352], 99.95th=[ 2802], 00:35:35.650 | 99.99th=[ 2802] 00:35:35.650 bw ( KiB/s): min= 5816, max= 5816, per=20.30%, avg=5816.00, stdev= 0.00, samples=1 00:35:35.650 iops : min= 1454, max= 1454, avg=1454.00, stdev= 0.00, samples=1 00:35:35.650 lat (usec) : 250=2.52%, 500=96.18%, 750=1.15%, 1000=0.04% 00:35:35.650 lat (msec) : 2=0.04%, 4=0.04%, 50=0.04% 00:35:35.650 cpu : usr=0.70%, sys=4.50%, ctx=2615, majf=0, minf=11 00:35:35.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:35.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.650 issued rwts: total=1079,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:35.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:35.650 job1: (groupid=0, jobs=1): err= 0: pid=126589: Sat Dec 14 18:55:50 2024 00:35:35.650 read: IOPS=1716, BW=6865KiB/s (7030kB/s)(6872KiB/1001msec) 00:35:35.650 slat (nsec): min=13571, max=86320, avg=18425.48, stdev=7176.23 00:35:35.650 clat (usec): min=188, max=684, avg=270.90, stdev=31.00 00:35:35.650 lat (usec): min=208, max=699, avg=289.33, stdev=31.61 00:35:35.650 clat percentiles (usec): 00:35:35.650 | 1.00th=[ 200], 5.00th=[ 215], 10.00th=[ 235], 20.00th=[ 251], 00:35:35.650 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:35:35.650 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 322], 00:35:35.650 | 99.00th=[ 343], 99.50th=[ 359], 99.90th=[ 437], 99.95th=[ 685], 00:35:35.650 | 99.99th=[ 685] 00:35:35.650 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:35:35.650 slat (usec): min=19, max=109, avg=31.35, stdev=12.70 00:35:35.650 clat (usec): min=125, max=3081, avg=210.41, stdev=73.57 00:35:35.650 lat (usec): min=151, max=3119, avg=241.76, stdev=76.63 00:35:35.650 clat percentiles (usec): 00:35:35.650 | 1.00th=[ 147], 5.00th=[ 161], 10.00th=[ 169], 20.00th=[ 180], 00:35:35.650 | 30.00th=[ 188], 40.00th=[ 196], 50.00th=[ 204], 60.00th=[ 212], 00:35:35.650 | 70.00th=[ 223], 80.00th=[ 237], 90.00th=[ 255], 95.00th=[ 273], 00:35:35.650 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 635], 99.95th=[ 701], 00:35:35.650 | 99.99th=[ 3097] 00:35:35.650 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:35:35.650 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:35:35.650 lat (usec) : 250=56.64%, 500=43.26%, 750=0.08% 00:35:35.650 lat (msec) : 4=0.03% 00:35:35.650 cpu : usr=1.80%, sys=6.70%, ctx=3766, majf=0, minf=11 00:35:35.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:35.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.650 issued rwts: total=1718,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:35.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:35.650 job2: (groupid=0, jobs=1): err= 0: pid=126590: Sat Dec 14 18:55:50 2024 00:35:35.650 read: IOPS=1737, BW=6949KiB/s (7116kB/s)(6956KiB/1001msec) 00:35:35.650 slat (nsec): min=13484, max=71728, avg=18513.36, stdev=5696.48 00:35:35.650 clat (usec): min=182, max=393, avg=268.51, stdev=30.53 00:35:35.650 lat (usec): min=200, max=413, avg=287.02, stdev=31.23 00:35:35.650 clat percentiles (usec): 00:35:35.650 | 
1.00th=[ 192], 5.00th=[ 206], 10.00th=[ 227], 20.00th=[ 251], 00:35:35.650 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:35:35.650 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 318], 00:35:35.650 | 99.00th=[ 343], 99.50th=[ 355], 99.90th=[ 375], 99.95th=[ 396], 00:35:35.650 | 99.99th=[ 396] 00:35:35.650 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:35:35.650 slat (usec): min=19, max=101, avg=30.93, stdev=11.45 00:35:35.650 clat (usec): min=125, max=512, avg=209.57, stdev=35.33 00:35:35.650 lat (usec): min=156, max=533, avg=240.50, stdev=40.69 00:35:35.650 clat percentiles (usec): 00:35:35.650 | 1.00th=[ 143], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 182], 00:35:35.650 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 206], 60.00th=[ 215], 00:35:35.650 | 70.00th=[ 225], 80.00th=[ 237], 90.00th=[ 255], 95.00th=[ 273], 00:35:35.650 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 400], 99.95th=[ 441], 00:35:35.650 | 99.99th=[ 515] 00:35:35.650 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:35:35.650 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:35:35.650 lat (usec) : 250=56.48%, 500=43.49%, 750=0.03% 00:35:35.650 cpu : usr=1.70%, sys=6.90%, ctx=3788, majf=0, minf=6 00:35:35.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:35.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.650 issued rwts: total=1739,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:35.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:35.650 job3: (groupid=0, jobs=1): err= 0: pid=126591: Sat Dec 14 18:55:50 2024 00:35:35.650 read: IOPS=1078, BW=4316KiB/s (4419kB/s)(4320KiB/1001msec) 00:35:35.650 slat (nsec): min=9459, max=56420, avg=16457.57, stdev=5137.61 00:35:35.650 clat (usec): min=217, max=41233, avg=423.64, stdev=1243.94 00:35:35.650 lat (usec): min=233, max=41249, avg=440.10, stdev=1243.92 00:35:35.650 clat percentiles (usec): 00:35:35.650 | 1.00th=[ 258], 5.00th=[ 318], 10.00th=[ 330], 20.00th=[ 351], 00:35:35.650 | 30.00th=[ 363], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 396], 00:35:35.650 | 70.00th=[ 404], 80.00th=[ 420], 90.00th=[ 445], 95.00th=[ 469], 00:35:35.650 | 99.00th=[ 519], 99.50th=[ 545], 99.90th=[ 816], 99.95th=[41157], 00:35:35.650 | 99.99th=[41157] 00:35:35.650 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:35:35.650 slat (usec): min=11, max=112, avg=24.53, stdev= 7.97 00:35:35.650 clat (usec): min=171, max=2891, avg=313.97, stdev=83.88 00:35:35.650 lat (usec): min=199, max=2919, avg=338.51, stdev=84.34 00:35:35.650 clat percentiles (usec): 00:35:35.650 | 1.00th=[ 239], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 273], 00:35:35.650 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 318], 00:35:35.650 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 371], 95.00th=[ 392], 00:35:35.650 | 99.00th=[ 465], 99.50th=[ 586], 99.90th=[ 930], 99.95th=[ 2900], 00:35:35.650 | 99.99th=[ 2900] 00:35:35.650 bw ( KiB/s): min= 5808, max= 5808, per=20.28%, avg=5808.00, stdev= 0.00, samples=1 00:35:35.650 iops : min= 1452, max= 1452, avg=1452.00, stdev= 0.00, samples=1 00:35:35.650 lat (usec) : 250=2.26%, 500=96.44%, 750=1.15%, 1000=0.08% 00:35:35.650 lat (msec) : 4=0.04%, 50=0.04% 00:35:35.650 cpu : usr=0.90%, sys=4.40%, ctx=2617, majf=0, minf=17 00:35:35.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:35:35.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.650 issued rwts: total=1080,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:35.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:35.650 00:35:35.650 Run status group 0 (all jobs): 00:35:35.650 READ: bw=21.9MiB/s (23.0MB/s), 4312KiB/s-6949KiB/s (4415kB/s-7116kB/s), io=21.9MiB (23.0MB), run=1001-1001msec 00:35:35.650 WRITE: bw=28.0MiB/s (29.3MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:35:35.650 00:35:35.650 Disk stats (read/write): 00:35:35.650 nvme0n1: ios=1074/1193, merge=0/0, ticks=465/386, in_queue=851, util=87.88% 00:35:35.650 nvme0n2: ios=1575/1699, merge=0/0, ticks=447/388, in_queue=835, util=88.54% 00:35:35.650 nvme0n3: ios=1542/1699, merge=0/0, ticks=427/390, in_queue=817, util=89.13% 00:35:35.650 nvme0n4: ios=1024/1193, merge=0/0, ticks=436/391, in_queue=827, util=89.69% 00:35:35.650 18:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:35:35.650 [global] 00:35:35.650 thread=1 00:35:35.650 invalidate=1 00:35:35.650 rw=write 00:35:35.650 time_based=1 00:35:35.650 runtime=1 00:35:35.650 ioengine=libaio 00:35:35.650 direct=1 00:35:35.650 bs=4096 00:35:35.650 iodepth=128 00:35:35.650 norandommap=0 00:35:35.650 numjobs=1 00:35:35.650 00:35:35.650 verify_dump=1 00:35:35.650 verify_backlog=512 00:35:35.650 verify_state_save=0 00:35:35.650 do_verify=1 00:35:35.650 verify=crc32c-intel 00:35:35.650 [job0] 00:35:35.650 filename=/dev/nvme0n1 00:35:35.650 [job1] 00:35:35.650 filename=/dev/nvme0n2 00:35:35.650 [job2] 00:35:35.650 filename=/dev/nvme0n3 00:35:35.650 [job3] 00:35:35.650 filename=/dev/nvme0n4 00:35:35.650 Could not set queue depth (nvme0n1) 00:35:35.650 Could not set queue depth (nvme0n2) 00:35:35.650 Could not set queue depth (nvme0n3) 00:35:35.650 Could not set queue depth (nvme0n4) 00:35:35.650 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:35.650 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:35.651 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:35.651 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:35.651 fio-3.35 00:35:35.651 Starting 4 threads 00:35:36.588 00:35:36.588 job0: (groupid=0, jobs=1): err= 0: pid=126649: Sat Dec 14 18:55:52 2024 00:35:36.588 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:35:36.588 slat (usec): min=4, max=6238, avg=110.37, stdev=507.25 00:35:36.588 clat (usec): min=9400, max=30562, avg=14905.23, stdev=3509.28 00:35:36.588 lat (usec): min=9413, max=33564, avg=15015.60, stdev=3540.05 00:35:36.588 clat percentiles (usec): 00:35:36.588 | 1.00th=[10552], 5.00th=[11338], 10.00th=[12387], 20.00th=[12911], 00:35:36.588 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13960], 60.00th=[14353], 00:35:36.588 | 70.00th=[14746], 80.00th=[15139], 90.00th=[21890], 95.00th=[23200], 00:35:36.588 | 99.00th=[27395], 99.50th=[27657], 99.90th=[30540], 99.95th=[30540], 00:35:36.588 | 99.99th=[30540] 00:35:36.588 write: IOPS=4523, BW=17.7MiB/s (18.5MB/s)(17.8MiB/1005msec); 0 zone resets 00:35:36.588 slat (usec): 
min=11, max=4623, avg=113.02, stdev=515.47 00:35:36.588 clat (usec): min=3124, max=32835, avg=14489.79, stdev=4233.96 00:35:36.588 lat (usec): min=4532, max=32866, avg=14602.81, stdev=4247.17 00:35:36.588 clat percentiles (usec): 00:35:36.588 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[10814], 20.00th=[12387], 00:35:36.588 | 30.00th=[12911], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:35:36.588 | 70.00th=[14615], 80.00th=[15008], 90.00th=[16909], 95.00th=[26346], 00:35:36.588 | 99.00th=[30278], 99.50th=[31589], 99.90th=[32900], 99.95th=[32900], 00:35:36.588 | 99.99th=[32900] 00:35:36.588 bw ( KiB/s): min=15208, max=20136, per=35.02%, avg=17672.00, stdev=3484.62, samples=2 00:35:36.588 iops : min= 3802, max= 5034, avg=4418.00, stdev=871.16, samples=2 00:35:36.588 lat (msec) : 4=0.01%, 10=1.77%, 20=88.12%, 50=10.10% 00:35:36.588 cpu : usr=4.38%, sys=12.45%, ctx=414, majf=0, minf=12 00:35:36.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:35:36.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:36.588 issued rwts: total=4096,4546,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.588 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:36.588 job1: (groupid=0, jobs=1): err= 0: pid=126650: Sat Dec 14 18:55:52 2024 00:35:36.588 read: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec) 00:35:36.588 slat (usec): min=6, max=9205, avg=289.71, stdev=1053.67 00:35:36.588 clat (usec): min=15708, max=50294, avg=35627.74, stdev=7331.41 00:35:36.588 lat (usec): min=18158, max=50317, avg=35917.44, stdev=7364.38 00:35:36.588 clat percentiles (usec): 00:35:36.588 | 1.00th=[18220], 5.00th=[20317], 10.00th=[23987], 20.00th=[29230], 00:35:36.588 | 30.00th=[32375], 40.00th=[35914], 50.00th=[38011], 60.00th=[39060], 00:35:36.588 | 70.00th=[40109], 80.00th=[41681], 90.00th=[43254], 95.00th=[44827], 00:35:36.588 | 99.00th=[47973], 99.50th=[47973], 99.90th=[47973], 99.95th=[50070], 00:35:36.588 | 99.99th=[50070] 00:35:36.588 write: IOPS=1886, BW=7546KiB/s (7727kB/s)(7584KiB/1005msec); 0 zone resets 00:35:36.588 slat (usec): min=16, max=9547, avg=284.12, stdev=932.14 00:35:36.588 clat (usec): min=2920, max=49096, avg=37235.51, stdev=8194.36 00:35:36.588 lat (usec): min=4187, max=49139, avg=37519.64, stdev=8186.73 00:35:36.588 clat percentiles (usec): 00:35:36.588 | 1.00th=[ 7767], 5.00th=[22152], 10.00th=[26084], 20.00th=[29492], 00:35:36.588 | 30.00th=[37487], 40.00th=[39060], 50.00th=[40109], 60.00th=[41681], 00:35:36.588 | 70.00th=[42206], 80.00th=[42730], 90.00th=[43779], 95.00th=[44303], 00:35:36.588 | 99.00th=[47973], 99.50th=[48497], 99.90th=[49021], 99.95th=[49021], 00:35:36.588 | 99.99th=[49021] 00:35:36.588 bw ( KiB/s): min= 6712, max= 7432, per=14.02%, avg=7072.00, stdev=509.12, samples=2 00:35:36.588 iops : min= 1678, max= 1858, avg=1768.00, stdev=127.28, samples=2 00:35:36.588 lat (msec) : 4=0.03%, 10=0.93%, 20=3.32%, 50=95.69%, 100=0.03% 00:35:36.588 cpu : usr=2.29%, sys=6.57%, ctx=505, majf=0, minf=15 00:35:36.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:35:36.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:36.588 issued rwts: total=1536,1896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.588 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:36.588 job2: (groupid=0, jobs=1): 
err= 0: pid=126652: Sat Dec 14 18:55:52 2024 00:35:36.588 read: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec) 00:35:36.588 slat (usec): min=5, max=9174, avg=272.73, stdev=1028.79 00:35:36.588 clat (usec): min=17086, max=49419, avg=34406.95, stdev=8952.67 00:35:36.588 lat (usec): min=18106, max=49464, avg=34679.68, stdev=9014.80 00:35:36.588 clat percentiles (usec): 00:35:36.588 | 1.00th=[18220], 5.00th=[21103], 10.00th=[21890], 20.00th=[22676], 00:35:36.588 | 30.00th=[23462], 40.00th=[37487], 50.00th=[38536], 60.00th=[39584], 00:35:36.588 | 70.00th=[40633], 80.00th=[42206], 90.00th=[43254], 95.00th=[44303], 00:35:36.588 | 99.00th=[46924], 99.50th=[47973], 99.90th=[47973], 99.95th=[49546], 00:35:36.588 | 99.99th=[49546] 00:35:36.588 write: IOPS=2017, BW=8072KiB/s (8266kB/s)(8088KiB/1002msec); 0 zone resets 00:35:36.588 slat (usec): min=11, max=9363, avg=279.07, stdev=973.89 00:35:36.588 clat (usec): min=671, max=50265, avg=35771.32, stdev=9649.87 00:35:36.588 lat (usec): min=5710, max=50286, avg=36050.39, stdev=9662.63 00:35:36.588 clat percentiles (usec): 00:35:36.588 | 1.00th=[ 6259], 5.00th=[17433], 10.00th=[20579], 20.00th=[24511], 00:35:36.588 | 30.00th=[37487], 40.00th=[38536], 50.00th=[39584], 60.00th=[41157], 00:35:36.588 | 70.00th=[42206], 80.00th=[42730], 90.00th=[43779], 95.00th=[44303], 00:35:36.588 | 99.00th=[46924], 99.50th=[49021], 99.90th=[50070], 99.95th=[50070], 00:35:36.588 | 99.99th=[50070] 00:35:36.588 bw ( KiB/s): min= 7446, max= 7720, per=15.03%, avg=7583.00, stdev=193.75, samples=2 00:35:36.588 iops : min= 1861, max= 1930, avg=1895.50, stdev=48.79, samples=2 00:35:36.588 lat (usec) : 750=0.03% 00:35:36.588 lat (msec) : 10=0.90%, 20=5.51%, 50=93.45%, 100=0.11% 00:35:36.588 cpu : usr=1.80%, sys=7.39%, ctx=482, majf=0, minf=7 00:35:36.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:35:36.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:36.588 issued rwts: total=1536,2022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.588 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:36.588 job3: (groupid=0, jobs=1): err= 0: pid=126653: Sat Dec 14 18:55:52 2024 00:35:36.588 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:35:36.588 slat (usec): min=3, max=4039, avg=118.38, stdev=544.72 00:35:36.589 clat (usec): min=11564, max=20042, avg=15639.28, stdev=1253.13 00:35:36.589 lat (usec): min=11924, max=21399, avg=15757.66, stdev=1195.29 00:35:36.589 clat percentiles (usec): 00:35:36.589 | 1.00th=[12256], 5.00th=[13435], 10.00th=[14222], 20.00th=[14877], 00:35:36.589 | 30.00th=[15139], 40.00th=[15270], 50.00th=[15533], 60.00th=[15926], 00:35:36.589 | 70.00th=[16188], 80.00th=[16581], 90.00th=[17171], 95.00th=[17957], 00:35:36.589 | 99.00th=[18482], 99.50th=[18482], 99.90th=[20055], 99.95th=[20055], 00:35:36.589 | 99.99th=[20055] 00:35:36.589 write: IOPS=4201, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1003msec); 0 zone resets 00:35:36.589 slat (usec): min=8, max=4380, avg=115.63, stdev=505.62 00:35:36.589 clat (usec): min=518, max=18824, avg=14871.15, stdev=1873.87 00:35:36.589 lat (usec): min=4180, max=18846, avg=14986.78, stdev=1852.28 00:35:36.589 clat percentiles (usec): 00:35:36.589 | 1.00th=[ 8356], 5.00th=[12256], 10.00th=[12649], 20.00th=[13304], 00:35:36.589 | 30.00th=[13829], 40.00th=[14877], 50.00th=[15270], 60.00th=[15533], 00:35:36.589 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17433], 
00:35:36.589 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18744], 99.95th=[18744], 00:35:36.589 | 99.99th=[18744] 00:35:36.589 bw ( KiB/s): min=16384, max=16400, per=32.49%, avg=16392.00, stdev=11.31, samples=2 00:35:36.589 iops : min= 4096, max= 4100, avg=4098.00, stdev= 2.83, samples=2 00:35:36.589 lat (usec) : 750=0.01% 00:35:36.589 lat (msec) : 10=0.85%, 20=99.05%, 50=0.08% 00:35:36.589 cpu : usr=3.29%, sys=11.28%, ctx=458, majf=0, minf=11 00:35:36.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:35:36.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:36.589 issued rwts: total=4096,4214,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.589 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:36.589 00:35:36.589 Run status group 0 (all jobs): 00:35:36.589 READ: bw=43.8MiB/s (45.9MB/s), 6113KiB/s-16.0MiB/s (6260kB/s-16.7MB/s), io=44.0MiB (46.1MB), run=1002-1005msec 00:35:36.589 WRITE: bw=49.3MiB/s (51.7MB/s), 7546KiB/s-17.7MiB/s (7727kB/s-18.5MB/s), io=49.5MiB (51.9MB), run=1002-1005msec 00:35:36.589 00:35:36.589 Disk stats (read/write): 00:35:36.589 nvme0n1: ios=3889/4096, merge=0/0, ticks=16989/15431, in_queue=32420, util=88.28% 00:35:36.589 nvme0n2: ios=1227/1536, merge=0/0, ticks=11644/14507, in_queue=26151, util=88.37% 00:35:36.589 nvme0n3: ios=1230/1536, merge=0/0, ticks=11403/15143, in_queue=26546, util=88.91% 00:35:36.589 nvme0n4: ios=3552/3584, merge=0/0, ticks=13340/12241, in_queue=25581, util=89.75% 00:35:36.589 18:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:35:36.848 [global] 00:35:36.848 thread=1 00:35:36.848 invalidate=1 00:35:36.848 rw=randwrite 00:35:36.848 time_based=1 00:35:36.848 runtime=1 00:35:36.848 ioengine=libaio 00:35:36.848 direct=1 00:35:36.848 bs=4096 00:35:36.848 iodepth=128 00:35:36.848 norandommap=0 00:35:36.848 numjobs=1 00:35:36.848 00:35:36.848 verify_dump=1 00:35:36.848 verify_backlog=512 00:35:36.848 verify_state_save=0 00:35:36.848 do_verify=1 00:35:36.848 verify=crc32c-intel 00:35:36.848 [job0] 00:35:36.848 filename=/dev/nvme0n1 00:35:36.848 [job1] 00:35:36.848 filename=/dev/nvme0n2 00:35:36.848 [job2] 00:35:36.848 filename=/dev/nvme0n3 00:35:36.848 [job3] 00:35:36.848 filename=/dev/nvme0n4 00:35:36.848 Could not set queue depth (nvme0n1) 00:35:36.848 Could not set queue depth (nvme0n2) 00:35:36.848 Could not set queue depth (nvme0n3) 00:35:36.848 Could not set queue depth (nvme0n4) 00:35:36.848 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:36.848 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:36.848 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:36.848 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:36.848 fio-3.35 00:35:36.848 Starting 4 threads 00:35:38.226 00:35:38.226 job0: (groupid=0, jobs=1): err= 0: pid=126708: Sat Dec 14 18:55:53 2024 00:35:38.226 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:35:38.226 slat (usec): min=9, max=3459, avg=104.45, stdev=419.86 00:35:38.226 clat (usec): min=9099, max=16621, avg=13779.77, stdev=877.28 00:35:38.226 lat (usec): min=9112, 
max=17705, avg=13884.21, stdev=805.00 00:35:38.226 clat percentiles (usec): 00:35:38.226 | 1.00th=[10945], 5.00th=[12125], 10.00th=[12780], 20.00th=[13173], 00:35:38.226 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13829], 60.00th=[14091], 00:35:38.227 | 70.00th=[14222], 80.00th=[14353], 90.00th=[14746], 95.00th=[15008], 00:35:38.227 | 99.00th=[15795], 99.50th=[15795], 99.90th=[16450], 99.95th=[16581], 00:35:38.227 | 99.99th=[16581] 00:35:38.227 write: IOPS=4670, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1001msec); 0 zone resets 00:35:38.227 slat (usec): min=11, max=3742, avg=102.98, stdev=421.39 00:35:38.227 clat (usec): min=280, max=16953, avg=13447.81, stdev=1673.40 00:35:38.227 lat (usec): min=3687, max=16977, avg=13550.79, stdev=1664.78 00:35:38.227 clat percentiles (usec): 00:35:38.227 | 1.00th=[ 7111], 5.00th=[11076], 10.00th=[11469], 20.00th=[12125], 00:35:38.227 | 30.00th=[12649], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:35:38.227 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15270], 95.00th=[15664], 00:35:38.227 | 99.00th=[16319], 99.50th=[16450], 99.90th=[16909], 99.95th=[16909], 00:35:38.227 | 99.99th=[16909] 00:35:38.227 bw ( KiB/s): min=19768, max=19768, per=41.66%, avg=19768.00, stdev= 0.00, samples=1 00:35:38.227 iops : min= 4942, max= 4942, avg=4942.00, stdev= 0.00, samples=1 00:35:38.227 lat (usec) : 500=0.01% 00:35:38.227 lat (msec) : 4=0.12%, 10=0.71%, 20=99.16% 00:35:38.227 cpu : usr=3.10%, sys=15.00%, ctx=672, majf=0, minf=13 00:35:38.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:35:38.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:38.227 issued rwts: total=4608,4675,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:38.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:38.227 job1: (groupid=0, jobs=1): err= 0: pid=126709: Sat Dec 14 18:55:53 2024 00:35:38.227 read: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec) 00:35:38.227 slat (usec): min=11, max=9254, avg=322.32, stdev=1135.08 00:35:38.227 clat (usec): min=13804, max=50589, avg=39932.93, stdev=4271.77 00:35:38.227 lat (usec): min=13825, max=50608, avg=40255.25, stdev=4303.57 00:35:38.227 clat percentiles (usec): 00:35:38.227 | 1.00th=[20055], 5.00th=[34341], 10.00th=[36439], 20.00th=[38011], 00:35:38.227 | 30.00th=[38536], 40.00th=[39584], 50.00th=[40109], 60.00th=[40633], 00:35:38.227 | 70.00th=[41681], 80.00th=[42730], 90.00th=[44303], 95.00th=[45351], 00:35:38.227 | 99.00th=[48497], 99.50th=[49546], 99.90th=[50594], 99.95th=[50594], 00:35:38.227 | 99.99th=[50594] 00:35:38.227 write: IOPS=1560, BW=6243KiB/s (6393kB/s)(6268KiB/1004msec); 0 zone resets 00:35:38.227 slat (usec): min=11, max=10652, avg=314.17, stdev=1096.20 00:35:38.227 clat (usec): min=386, max=51013, avg=40999.69, stdev=5377.95 00:35:38.227 lat (usec): min=7289, max=51035, avg=41313.86, stdev=5283.95 00:35:38.227 clat percentiles (usec): 00:35:38.227 | 1.00th=[ 8455], 5.00th=[36963], 10.00th=[38536], 20.00th=[39060], 00:35:38.227 | 30.00th=[39584], 40.00th=[40633], 50.00th=[41681], 60.00th=[42206], 00:35:38.227 | 70.00th=[42730], 80.00th=[43254], 90.00th=[44827], 95.00th=[47973], 00:35:38.227 | 99.00th=[50070], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119], 00:35:38.227 | 99.99th=[51119] 00:35:38.227 bw ( KiB/s): min= 4856, max= 7432, per=12.95%, avg=6144.00, stdev=1821.51, samples=2 00:35:38.227 iops : min= 1214, max= 1858, avg=1536.00, stdev=455.38, samples=2 00:35:38.227 
lat (usec) : 500=0.03% 00:35:38.227 lat (msec) : 10=0.48%, 20=0.87%, 50=98.00%, 100=0.61% 00:35:38.227 cpu : usr=1.99%, sys=6.18%, ctx=530, majf=0, minf=17 00:35:38.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:35:38.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:38.227 issued rwts: total=1536,1567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:38.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:38.227 job2: (groupid=0, jobs=1): err= 0: pid=126710: Sat Dec 14 18:55:53 2024 00:35:38.227 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:35:38.227 slat (usec): min=5, max=5123, avg=119.68, stdev=541.89 00:35:38.227 clat (usec): min=1729, max=18512, avg=15635.60, stdev=1595.68 00:35:38.227 lat (usec): min=3479, max=18955, avg=15755.28, stdev=1516.56 00:35:38.227 clat percentiles (usec): 00:35:38.227 | 1.00th=[ 7439], 5.00th=[13435], 10.00th=[14746], 20.00th=[15270], 00:35:38.227 | 30.00th=[15533], 40.00th=[15664], 50.00th=[15926], 60.00th=[16057], 00:35:38.227 | 70.00th=[16188], 80.00th=[16319], 90.00th=[16712], 95.00th=[17171], 00:35:38.227 | 99.00th=[17957], 99.50th=[17957], 99.90th=[18482], 99.95th=[18482], 00:35:38.227 | 99.99th=[18482] 00:35:38.227 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:35:38.227 slat (usec): min=13, max=4085, avg=116.19, stdev=497.64 00:35:38.227 clat (usec): min=11338, max=18931, avg=15285.46, stdev=1607.99 00:35:38.227 lat (usec): min=11361, max=18951, avg=15401.65, stdev=1600.14 00:35:38.227 clat percentiles (usec): 00:35:38.227 | 1.00th=[12387], 5.00th=[12780], 10.00th=[13173], 20.00th=[13566], 00:35:38.227 | 30.00th=[13960], 40.00th=[14877], 50.00th=[15664], 60.00th=[15926], 00:35:38.227 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17171], 95.00th=[17695], 00:35:38.227 | 99.00th=[18220], 99.50th=[18482], 99.90th=[19006], 99.95th=[19006], 00:35:38.227 | 99.99th=[19006] 00:35:38.227 bw ( KiB/s): min=16384, max=16384, per=34.53%, avg=16384.00, stdev= 0.00, samples=2 00:35:38.227 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:35:38.227 lat (msec) : 2=0.01%, 4=0.26%, 10=0.52%, 20=99.21% 00:35:38.227 cpu : usr=3.59%, sys=13.07%, ctx=478, majf=0, minf=13 00:35:38.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:35:38.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:38.227 issued rwts: total=4096,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:38.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:38.227 job3: (groupid=0, jobs=1): err= 0: pid=126711: Sat Dec 14 18:55:53 2024 00:35:38.227 read: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec) 00:35:38.227 slat (usec): min=11, max=9566, avg=324.86, stdev=1098.24 00:35:38.227 clat (usec): min=14896, max=47975, avg=39818.59, stdev=4456.00 00:35:38.227 lat (usec): min=14915, max=48393, avg=40143.44, stdev=4468.07 00:35:38.227 clat percentiles (usec): 00:35:38.227 | 1.00th=[15270], 5.00th=[35390], 10.00th=[36439], 20.00th=[38011], 00:35:38.227 | 30.00th=[39060], 40.00th=[39584], 50.00th=[40109], 60.00th=[41157], 00:35:38.227 | 70.00th=[41681], 80.00th=[42206], 90.00th=[44303], 95.00th=[45876], 00:35:38.227 | 99.00th=[47449], 99.50th=[47449], 99.90th=[47973], 99.95th=[47973], 00:35:38.227 | 99.99th=[47973] 00:35:38.227 write: 
IOPS=1569, BW=6279KiB/s (6430kB/s)(6292KiB/1002msec); 0 zone resets 00:35:38.227 slat (usec): min=16, max=10157, avg=308.90, stdev=1051.60 00:35:38.227 clat (usec): min=583, max=51863, avg=40831.36, stdev=5667.27 00:35:38.227 lat (usec): min=6808, max=51885, avg=41140.26, stdev=5574.80 00:35:38.227 clat percentiles (usec): 00:35:38.227 | 1.00th=[ 7308], 5.00th=[38536], 10.00th=[39060], 20.00th=[39584], 00:35:38.227 | 30.00th=[40109], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:35:38.227 | 70.00th=[42730], 80.00th=[43254], 90.00th=[43779], 95.00th=[46400], 00:35:38.227 | 99.00th=[48497], 99.50th=[49021], 99.90th=[51643], 99.95th=[51643], 00:35:38.227 | 99.99th=[51643] 00:35:38.227 bw ( KiB/s): min= 4760, max= 7512, per=12.93%, avg=6136.00, stdev=1945.96, samples=2 00:35:38.227 iops : min= 1190, max= 1878, avg=1534.00, stdev=486.49, samples=2 00:35:38.227 lat (usec) : 750=0.03% 00:35:38.227 lat (msec) : 10=1.03%, 20=0.74%, 50=98.04%, 100=0.16% 00:35:38.227 cpu : usr=2.00%, sys=6.09%, ctx=514, majf=0, minf=7 00:35:38.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:35:38.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:38.227 issued rwts: total=1536,1573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:38.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:38.227 00:35:38.227 Run status group 0 (all jobs): 00:35:38.227 READ: bw=45.8MiB/s (48.0MB/s), 6120KiB/s-18.0MiB/s (6266kB/s-18.9MB/s), io=46.0MiB (48.2MB), run=1001-1004msec 00:35:38.227 WRITE: bw=46.3MiB/s (48.6MB/s), 6243KiB/s-18.2MiB/s (6393kB/s-19.1MB/s), io=46.5MiB (48.8MB), run=1001-1004msec 00:35:38.227 00:35:38.227 Disk stats (read/write): 00:35:38.227 nvme0n1: ios=3896/4096, merge=0/0, ticks=12451/12194, in_queue=24645, util=87.46% 00:35:38.227 nvme0n2: ios=1118/1536, merge=0/0, ticks=11153/14768, in_queue=25921, util=87.88% 00:35:38.227 nvme0n3: ios=3443/3584, merge=0/0, ticks=12520/11859, in_queue=24379, util=89.09% 00:35:38.227 nvme0n4: ios=1092/1536, merge=0/0, ticks=11024/15113, in_queue=26137, util=89.12% 00:35:38.227 18:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:35:38.227 18:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=126725 00:35:38.227 18:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:35:38.227 18:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:35:38.227 [global] 00:35:38.227 thread=1 00:35:38.227 invalidate=1 00:35:38.227 rw=read 00:35:38.227 time_based=1 00:35:38.227 runtime=10 00:35:38.227 ioengine=libaio 00:35:38.227 direct=1 00:35:38.227 bs=4096 00:35:38.227 iodepth=1 00:35:38.227 norandommap=1 00:35:38.227 numjobs=1 00:35:38.227 00:35:38.227 [job0] 00:35:38.227 filename=/dev/nvme0n1 00:35:38.227 [job1] 00:35:38.227 filename=/dev/nvme0n2 00:35:38.227 [job2] 00:35:38.227 filename=/dev/nvme0n3 00:35:38.227 [job3] 00:35:38.227 filename=/dev/nvme0n4 00:35:38.227 Could not set queue depth (nvme0n1) 00:35:38.227 Could not set queue depth (nvme0n2) 00:35:38.227 Could not set queue depth (nvme0n3) 00:35:38.227 Could not set queue depth (nvme0n4) 00:35:38.227 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:38.227 job1: 
(g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:38.227 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:38.227 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:38.227 fio-3.35 00:35:38.227 Starting 4 threads 00:35:41.515 18:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:35:41.515 fio: pid=126768, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:41.515 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=30113792, buflen=4096 00:35:41.515 18:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:35:41.774 fio: pid=126767, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:41.774 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=36573184, buflen=4096 00:35:41.774 18:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:41.774 18:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:35:42.033 fio: pid=126765, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:42.033 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=47738880, buflen=4096 00:35:42.033 18:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:42.033 18:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:35:42.292 fio: pid=126766, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:42.292 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=39063552, buflen=4096 00:35:42.292 00:35:42.292 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=126765: Sat Dec 14 18:55:57 2024 00:35:42.292 read: IOPS=3367, BW=13.2MiB/s (13.8MB/s)(45.5MiB/3461msec) 00:35:42.292 slat (usec): min=11, max=11207, avg=19.83, stdev=184.35 00:35:42.292 clat (usec): min=159, max=3980, avg=275.47, stdev=76.16 00:35:42.292 lat (usec): min=173, max=12688, avg=295.30, stdev=212.79 00:35:42.292 clat percentiles (usec): 00:35:42.292 | 1.00th=[ 186], 5.00th=[ 200], 10.00th=[ 225], 20.00th=[ 247], 00:35:42.292 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:35:42.292 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 318], 95.00th=[ 334], 00:35:42.292 | 99.00th=[ 383], 99.50th=[ 486], 99.90th=[ 1303], 99.95th=[ 2040], 00:35:42.292 | 99.99th=[ 2474] 00:35:42.292 bw ( KiB/s): min=12936, max=13608, per=33.07%, avg=13321.33, stdev=256.02, samples=6 00:35:42.292 iops : min= 3234, max= 3402, avg=3330.33, stdev=64.01, samples=6 00:35:42.292 lat (usec) : 250=22.71%, 500=76.79%, 750=0.25%, 1000=0.10% 00:35:42.292 lat (msec) : 2=0.09%, 4=0.05% 00:35:42.292 cpu : usr=1.04%, sys=4.31%, ctx=11660, majf=0, minf=1 00:35:42.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:35:42.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.292 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.292 issued rwts: total=11656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:42.292 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=126766: Sat Dec 14 18:55:57 2024 00:35:42.292 read: IOPS=2563, BW=10.0MiB/s (10.5MB/s)(37.3MiB/3721msec) 00:35:42.292 slat (usec): min=9, max=12243, avg=21.87, stdev=227.22 00:35:42.292 clat (usec): min=3, max=166851, avg=366.58, stdev=1707.75 00:35:42.292 lat (usec): min=164, max=166867, avg=388.45, stdev=1722.23 00:35:42.292 clat percentiles (usec): 00:35:42.292 | 1.00th=[ 176], 5.00th=[ 192], 10.00th=[ 208], 20.00th=[ 251], 00:35:42.292 | 30.00th=[ 343], 40.00th=[ 363], 50.00th=[ 375], 60.00th=[ 388], 00:35:42.292 | 70.00th=[ 396], 80.00th=[ 408], 90.00th=[ 420], 95.00th=[ 433], 00:35:42.292 | 99.00th=[ 465], 99.50th=[ 494], 99.90th=[ 963], 99.95th=[ 1713], 00:35:42.292 | 99.99th=[166724] 00:35:42.292 bw ( KiB/s): min= 9744, max=10810, per=25.01%, avg=10076.86, stdev=406.13, samples=7 00:35:42.292 iops : min= 2436, max= 2702, avg=2519.14, stdev=101.38, samples=7 00:35:42.292 lat (usec) : 4=0.01%, 250=19.94%, 500=79.61%, 750=0.33%, 1000=0.02% 00:35:42.292 lat (msec) : 2=0.04%, 4=0.03%, 250=0.01% 00:35:42.292 cpu : usr=0.62%, sys=3.41%, ctx=9556, majf=0, minf=2 00:35:42.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.292 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.292 issued rwts: total=9538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:42.292 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=126767: Sat Dec 14 18:55:57 2024 00:35:42.292 read: IOPS=2775, BW=10.8MiB/s (11.4MB/s)(34.9MiB/3218msec) 00:35:42.292 slat (usec): min=15, max=8740, avg=24.71, stdev=120.51 00:35:42.292 clat (usec): min=243, max=5339, avg=333.41, stdev=81.27 00:35:42.292 lat (usec): min=264, max=9123, avg=358.13, stdev=145.64 00:35:42.292 clat percentiles (usec): 00:35:42.292 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 302], 00:35:42.292 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 338], 00:35:42.292 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 375], 95.00th=[ 392], 00:35:42.292 | 99.00th=[ 433], 99.50th=[ 457], 99.90th=[ 947], 99.95th=[ 1532], 00:35:42.292 | 99.99th=[ 5342] 00:35:42.292 bw ( KiB/s): min=11176, max=11336, per=27.93%, avg=11252.00, stdev=64.55, samples=6 00:35:42.292 iops : min= 2794, max= 2834, avg=2813.00, stdev=16.14, samples=6 00:35:42.292 lat (usec) : 250=0.06%, 500=99.68%, 750=0.10%, 1000=0.07% 00:35:42.292 lat (msec) : 2=0.06%, 4=0.02%, 10=0.01% 00:35:42.292 cpu : usr=1.18%, sys=4.94%, ctx=8933, majf=0, minf=2 00:35:42.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.292 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.292 issued rwts: total=8930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:42.292 job3: (groupid=0, jobs=1): err=95 
(file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=126768: Sat Dec 14 18:55:57 2024 00:35:42.292 read: IOPS=2474, BW=9898KiB/s (10.1MB/s)(28.7MiB/2971msec) 00:35:42.292 slat (usec): min=9, max=130, avg=15.46, stdev= 5.21 00:35:42.292 clat (usec): min=167, max=2835, avg=387.22, stdev=47.15 00:35:42.292 lat (usec): min=200, max=2853, avg=402.68, stdev=47.24 00:35:42.292 clat percentiles (usec): 00:35:42.293 | 1.00th=[ 306], 5.00th=[ 326], 10.00th=[ 343], 20.00th=[ 359], 00:35:42.293 | 30.00th=[ 371], 40.00th=[ 379], 50.00th=[ 388], 60.00th=[ 396], 00:35:42.293 | 70.00th=[ 404], 80.00th=[ 412], 90.00th=[ 424], 95.00th=[ 437], 00:35:42.293 | 99.00th=[ 474], 99.50th=[ 494], 99.90th=[ 537], 99.95th=[ 635], 00:35:42.293 | 99.99th=[ 2835] 00:35:42.293 bw ( KiB/s): min= 9736, max=10000, per=24.45%, avg=9851.20, stdev=96.76, samples=5 00:35:42.293 iops : min= 2434, max= 2500, avg=2462.80, stdev=24.19, samples=5 00:35:42.293 lat (usec) : 250=0.05%, 500=99.50%, 750=0.39%, 1000=0.01% 00:35:42.293 lat (msec) : 2=0.01%, 4=0.01% 00:35:42.293 cpu : usr=0.74%, sys=2.96%, ctx=7370, majf=0, minf=2 00:35:42.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.293 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.293 issued rwts: total=7353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.293 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:42.293 00:35:42.293 Run status group 0 (all jobs): 00:35:42.293 READ: bw=39.3MiB/s (41.2MB/s), 9898KiB/s-13.2MiB/s (10.1MB/s-13.8MB/s), io=146MiB (153MB), run=2971-3721msec 00:35:42.293 00:35:42.293 Disk stats (read/write): 00:35:42.293 nvme0n1: ios=11230/0, merge=0/0, ticks=3171/0, in_queue=3171, util=95.22% 00:35:42.293 nvme0n2: ios=9120/0, merge=0/0, ticks=3443/0, in_queue=3443, util=95.40% 00:35:42.293 nvme0n3: ios=8678/0, merge=0/0, ticks=2937/0, in_queue=2937, util=96.30% 00:35:42.293 nvme0n4: ios=7085/0, merge=0/0, ticks=2767/0, in_queue=2767, util=96.76% 00:35:42.293 18:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:42.293 18:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:35:42.552 18:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:42.552 18:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:35:42.811 18:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:42.811 18:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:35:43.070 18:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:43.070 18:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:35:43.329 18:55:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:43.329 18:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:35:43.588 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:35:43.588 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 126725 00:35:43.588 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:35:43.588 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:43.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:43.588 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:43.588 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:35:43.588 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:35:43.588 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:43.588 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:43.588 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:35:43.588 nvmf hotplug test: fio failed as expected 00:35:43.588 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:35:43.588 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:35:43.588 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:35:43.588 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 
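The `set +e` above opens the tolerant phase of nvmftestfini: module unloads are retried because rmmod can race with the initiator that has only just disconnected. A minimal sketch of the pattern, reconstructed from the nvmf/common.sh trace lines that follow (the early exit on success is an assumption; the log only shows a single pass through the loop):

set +e
for i in {1..20}; do
    # nvmf/common.sh@125-127 in the trace below; stop retrying once both
    # initiator-side kernel modules unload cleanly.
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
done
set -e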
00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:43.848 rmmod nvme_tcp 00:35:43.848 rmmod nvme_fabrics 00:35:43.848 rmmod nvme_keyring 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 126257 ']' 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 126257 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 126257 ']' 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 126257 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:43.848 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 126257 00:35:44.107 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:44.107 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:44.107 killing process with pid 126257 00:35:44.107 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 126257' 00:35:44.107 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 126257 00:35:44.107 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 126257 00:35:44.107 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:44.107 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:44.107 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:44.107 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:35:44.107 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:35:44.107 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:44.107 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:35:44.107 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:44.107 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:35:44.107 18:55:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:35:44.107 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:35:44.366 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:35:44.366 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:35:44.366 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:35:44.366 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:35:44.366 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:35:44.366 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:35:44.366 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:35:44.366 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:35:44.366 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:35:44.366 18:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:44.366 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:44.366 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:35:44.366 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:44.366 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:44.366 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:44.366 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:35:44.366 00:35:44.366 real 0m19.509s 00:35:44.366 user 0m59.414s 00:35:44.366 sys 0m10.109s 00:35:44.366 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:44.366 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:44.366 ************************************ 00:35:44.366 END TEST nvmf_fio_target 00:35:44.366 ************************************ 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:44.626 
************************************ 00:35:44.626 START TEST nvmf_bdevio 00:35:44.626 ************************************ 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:44.626 * Looking for test storage... 00:35:44.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:44.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.626 --rc genhtml_branch_coverage=1 00:35:44.626 --rc genhtml_function_coverage=1 00:35:44.626 --rc genhtml_legend=1 00:35:44.626 --rc geninfo_all_blocks=1 00:35:44.626 --rc geninfo_unexecuted_blocks=1 00:35:44.626 00:35:44.626 ' 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:44.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.626 --rc genhtml_branch_coverage=1 00:35:44.626 --rc genhtml_function_coverage=1 00:35:44.626 --rc genhtml_legend=1 00:35:44.626 --rc geninfo_all_blocks=1 00:35:44.626 --rc geninfo_unexecuted_blocks=1 00:35:44.626 00:35:44.626 ' 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:44.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.626 --rc genhtml_branch_coverage=1 00:35:44.626 --rc genhtml_function_coverage=1 00:35:44.626 --rc genhtml_legend=1 00:35:44.626 --rc geninfo_all_blocks=1 00:35:44.626 --rc geninfo_unexecuted_blocks=1 00:35:44.626 00:35:44.626 ' 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:44.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.626 --rc genhtml_branch_coverage=1 00:35:44.626 --rc genhtml_function_coverage=1 00:35:44.626 --rc genhtml_legend=1 00:35:44.626 --rc geninfo_all_blocks=1 00:35:44.626 --rc geninfo_unexecuted_blocks=1 00:35:44.626 00:35:44.626 ' 00:35:44.626 18:56:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:44.626 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:44.627 18:56:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@456 -- # nvmf_veth_init 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:35:44.627 Cannot find device "nvmf_init_br" 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:35:44.627 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:35:44.887 Cannot find device "nvmf_init_br2" 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:35:44.887 Cannot find device "nvmf_tgt_br" 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:35:44.887 Cannot find device "nvmf_tgt_br2" 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:35:44.887 Cannot find device "nvmf_init_br" 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:35:44.887 Cannot find device "nvmf_init_br2" 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:35:44.887 Cannot find device "nvmf_tgt_br" 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:35:44.887 Cannot find device "nvmf_tgt_br2" 00:35:44.887 18:56:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:35:44.887 Cannot find device "nvmf_br" 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:35:44.887 Cannot find device "nvmf_init_if" 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:35:44.887 Cannot find device "nvmf_init_if2" 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:44.887 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:44.887 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:35:44.887 18:56:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:44.887 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:35:45.175 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:35:45.175 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:35:45.175 00:35:45.175 --- 10.0.0.3 ping statistics --- 00:35:45.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.175 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:35:45.175 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:35:45.175 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:35:45.175 00:35:45.175 --- 10.0.0.4 ping statistics --- 00:35:45.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.175 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:45.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:45.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:35:45.175 00:35:45.175 --- 10.0.0.1 ping statistics --- 00:35:45.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.175 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:35:45.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:45.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:35:45.175 00:35:45.175 --- 10.0.0.2 ping statistics --- 00:35:45.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.175 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@457 -- # return 0 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:45.175 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:45.176 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=127143 00:35:45.176 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 127143 00:35:45.176 
18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:45.176 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 127143 ']' 00:35:45.176 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.176 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:45.176 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:45.176 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:45.176 18:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:45.176 [2024-12-14 18:56:00.833618] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:45.176 [2024-12-14 18:56:00.834934] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:45.176 [2024-12-14 18:56:00.835005] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.446 [2024-12-14 18:56:00.976584] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:45.446 [2024-12-14 18:56:01.043108] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:45.446 [2024-12-14 18:56:01.043174] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:45.446 [2024-12-14 18:56:01.043200] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:45.446 [2024-12-14 18:56:01.043208] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:45.446 [2024-12-14 18:56:01.043214] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:45.446 [2024-12-14 18:56:01.043372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:35:45.446 [2024-12-14 18:56:01.043640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:35:45.446 [2024-12-14 18:56:01.044300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:35:45.446 [2024-12-14 18:56:01.044304] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:35:45.446 [2024-12-14 18:56:01.140245] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:45.446 [2024-12-14 18:56:01.140745] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:45.446 [2024-12-14 18:56:01.140941] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
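The spdk_thread interrupt-mode notices here follow directly from how the target was launched a few records earlier: `-m 0x78` is binary 1111000, i.e. cores 3-6, matching the four reactors reported during startup, and `--interrupt-mode` switches each poll-group thread from busy-polling to event-driven wakeups. A condensed sketch of that launch, with the namespace and binary paths taken verbatim from the trace:

# Run the target inside the test namespace; -m 0x78 pins the reactors to
# cores 3,4,5,6 and --interrupt-mode converts the poll-group threads named
# in the surrounding notices to interrupt-driven operation.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
nvmfpid=$!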
00:35:45.446 [2024-12-14 18:56:01.141188] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:45.446 [2024-12-14 18:56:01.141215] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:45.446 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:45.446 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:35:45.446 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:45.446 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:45.446 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:45.712 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:45.712 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:45.712 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.712 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:45.712 [2024-12-14 18:56:01.221421] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.712 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.712 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:45.712 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.712 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:45.712 Malloc0 00:35:45.712 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.712 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:45.712 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.712 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:45.712 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.713 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:45.713 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.713 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:45.713 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.713 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:35:45.713 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.713 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:45.713 [2024-12-14 18:56:01.289464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:45.713 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.713 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:45.713 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:45.713 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:35:45.713 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:35:45.713 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:35:45.713 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:35:45.713 { 00:35:45.713 "params": { 00:35:45.713 "name": "Nvme$subsystem", 00:35:45.713 "trtype": "$TEST_TRANSPORT", 00:35:45.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:45.713 "adrfam": "ipv4", 00:35:45.713 "trsvcid": "$NVMF_PORT", 00:35:45.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:45.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:45.713 "hdgst": ${hdgst:-false}, 00:35:45.713 "ddgst": ${ddgst:-false} 00:35:45.713 }, 00:35:45.713 "method": "bdev_nvme_attach_controller" 00:35:45.713 } 00:35:45.713 EOF 00:35:45.713 )") 00:35:45.713 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:35:45.713 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:35:45.713 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:35:45.713 18:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:35:45.713 "params": { 00:35:45.713 "name": "Nvme1", 00:35:45.713 "trtype": "tcp", 00:35:45.713 "traddr": "10.0.0.3", 00:35:45.713 "adrfam": "ipv4", 00:35:45.713 "trsvcid": "4420", 00:35:45.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:45.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:45.713 "hdgst": false, 00:35:45.713 "ddgst": false 00:35:45.713 }, 00:35:45.713 "method": "bdev_nvme_attach_controller" 00:35:45.713 }' 00:35:45.713 [2024-12-14 18:56:01.356409] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
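The fixture traced above follows the usual bdevio shape: create the TCP transport, back it with a 64 MiB malloc bdev, expose it through a subsystem, and listen on the namespace-side address. Condensed into plain RPC calls (the scripts/rpc.py path is an assumption; the trace drives the same methods through rpc_cmd):

# Condensed sketch of the bring-up just traced.
rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

bdevio itself takes no RPC socket: it consumes the generated bdev_nvme_attach_controller config printed above through --json /dev/fd/62.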
00:35:45.713 [2024-12-14 18:56:01.356509] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127185 ] 00:35:45.971 [2024-12-14 18:56:01.498228] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:45.971 [2024-12-14 18:56:01.565838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:45.971 [2024-12-14 18:56:01.565979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:35:45.971 [2024-12-14 18:56:01.565993] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:46.229 I/O targets: 00:35:46.229 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:46.229 00:35:46.229 00:35:46.229 CUnit - A unit testing framework for C - Version 2.1-3 00:35:46.229 http://cunit.sourceforge.net/ 00:35:46.229 00:35:46.229 00:35:46.229 Suite: bdevio tests on: Nvme1n1 00:35:46.229 Test: blockdev write read block ...passed 00:35:46.229 Test: blockdev write zeroes read block ...passed 00:35:46.229 Test: blockdev write zeroes read no split ...passed 00:35:46.229 Test: blockdev write zeroes read split ...passed 00:35:46.229 Test: blockdev write zeroes read split partial ...passed 00:35:46.229 Test: blockdev reset ...[2024-12-14 18:56:01.855261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.229 [2024-12-14 18:56:01.855375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xccc320 (9): Bad file descriptor 00:35:46.229 [2024-12-14 18:56:01.859034] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:35:46.229 passed 00:35:46.229 Test: blockdev write read 8 blocks ...passed 00:35:46.229 Test: blockdev write read size > 128k ...passed 00:35:46.229 Test: blockdev write read invalid size ...passed 00:35:46.229 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:46.229 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:46.229 Test: blockdev write read max offset ...passed 00:35:46.488 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:46.488 Test: blockdev writev readv 8 blocks ...passed 00:35:46.488 Test: blockdev writev readv 30 x 1block ...passed 00:35:46.488 Test: blockdev writev readv block ...passed 00:35:46.488 Test: blockdev writev readv size > 128k ...passed 00:35:46.488 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:46.488 Test: blockdev comparev and writev ...[2024-12-14 18:56:02.033900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:46.488 [2024-12-14 18:56:02.034054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.488 [2024-12-14 18:56:02.034185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:46.488 [2024-12-14 18:56:02.034265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:46.488 [2024-12-14 18:56:02.034804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:46.488 [2024-12-14 18:56:02.034926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:46.488 [2024-12-14 18:56:02.035011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:46.488 [2024-12-14 18:56:02.035102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:46.488 [2024-12-14 18:56:02.035676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:46.488 [2024-12-14 18:56:02.035803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:46.488 [2024-12-14 18:56:02.035878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:46.488 [2024-12-14 18:56:02.035936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:46.488 [2024-12-14 18:56:02.036542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:46.488 [2024-12-14 18:56:02.036674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:46.488 [2024-12-14 18:56:02.036761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:46.488 [2024-12-14 18:56:02.036846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:46.488 passed 00:35:46.488 Test: blockdev nvme passthru rw ...passed 00:35:46.488 Test: blockdev nvme passthru vendor specific ...[2024-12-14 18:56:02.120038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:46.488 [2024-12-14 18:56:02.120206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:46.488 [2024-12-14 18:56:02.120443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:46.488 [2024-12-14 18:56:02.120531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:46.488 [2024-12-14 18:56:02.120744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:46.488 [2024-12-14 18:56:02.120847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:46.488 [2024-12-14 18:56:02.121067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:46.488 [2024-12-14 18:56:02.121168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:46.488 passed 00:35:46.488 Test: blockdev nvme admin passthru ...passed 00:35:46.488 Test: blockdev copy ...passed 00:35:46.488 00:35:46.488 Run Summary: Type Total Ran Passed Failed Inactive 00:35:46.488 suites 1 1 n/a 0 0 00:35:46.488 tests 23 23 23 0 0 00:35:46.488 asserts 152 152 152 0 n/a 00:35:46.488 00:35:46.488 Elapsed time = 0.871 seconds 00:35:46.746 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:46.746 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.746 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:46.746 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.746 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:46.746 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:46.746 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:46.746 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:46.746 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:46.747 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:46.747 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:46.747 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:46.747 rmmod nvme_tcp 00:35:46.747 rmmod nvme_fabrics 00:35:46.747 rmmod nvme_keyring 00:35:46.747 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
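Teardown unloads the nvme kernel modules, kills the recorded nvmfpid, and then restores the firewall. The firewall half relies on every setup rule having been installed through a comment-tagging wrapper, so cleanup can strip exactly those rules; a sketch of both halves, reconstructed from the iptables invocations in this log:

# Setup side: tag each rule with an SPDK_NVMF comment (the ipts name matches
# the trace; the wrapper body is a reconstruction).
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# Teardown side: drop only the tagged rules, reload everything else atomically.
iptables-save | grep -v SPDK_NVMF | iptables-restore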
00:35:46.747 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:46.747 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:46.747 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 127143 ']' 00:35:46.747 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 127143 00:35:46.747 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 127143 ']' 00:35:46.747 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 127143 00:35:46.747 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:35:46.747 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:46.747 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 127143 00:35:47.005 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:35:47.005 killing process with pid 127143 00:35:47.005 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:35:47.005 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 127143' 00:35:47.005 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 127143 00:35:47.005 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 127143 00:35:47.005 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:47.005 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:47.005 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:47.005 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:35:47.005 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:47.006 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:35:47.006 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:35:47.006 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:47.006 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:35:47.006 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:35:47.006 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:35:47.264 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:35:47.264 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:35:47.264 18:56:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:35:47.264 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:35:47.264 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:35:47.264 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:35:47.264 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:35:47.265 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:35:47.265 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:35:47.265 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:47.265 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:47.265 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:35:47.265 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:47.265 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:47.265 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:47.265 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:35:47.265 00:35:47.265 real 0m2.836s 00:35:47.265 user 0m7.120s 00:35:47.265 sys 0m1.216s 00:35:47.265 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:47.265 18:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:47.265 ************************************ 00:35:47.265 END TEST nvmf_bdevio 00:35:47.265 ************************************ 00:35:47.265 18:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:47.265 00:35:47.265 real 3m36.091s 00:35:47.265 user 9m35.201s 00:35:47.265 sys 1m19.907s 00:35:47.265 18:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:47.265 18:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:47.265 ************************************ 00:35:47.265 END TEST nvmf_target_core_interrupt_mode 00:35:47.265 ************************************ 00:35:47.524 18:56:03 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:47.524 18:56:03 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:47.524 18:56:03 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:47.524 18:56:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:47.524 ************************************ 00:35:47.524 START TEST nvmf_interrupt 00:35:47.524 ************************************ 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:47.524 * Looking for test storage... 00:35:47.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:47.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.524 --rc genhtml_branch_coverage=1 00:35:47.524 --rc genhtml_function_coverage=1 00:35:47.524 --rc genhtml_legend=1 00:35:47.524 --rc geninfo_all_blocks=1 00:35:47.524 --rc geninfo_unexecuted_blocks=1 00:35:47.524 00:35:47.524 ' 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:47.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.524 --rc genhtml_branch_coverage=1 00:35:47.524 --rc genhtml_function_coverage=1 00:35:47.524 --rc genhtml_legend=1 00:35:47.524 --rc geninfo_all_blocks=1 00:35:47.524 --rc geninfo_unexecuted_blocks=1 00:35:47.524 00:35:47.524 ' 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:47.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.524 --rc genhtml_branch_coverage=1 00:35:47.524 --rc genhtml_function_coverage=1 00:35:47.524 --rc genhtml_legend=1 00:35:47.524 --rc geninfo_all_blocks=1 00:35:47.524 --rc geninfo_unexecuted_blocks=1 00:35:47.524 00:35:47.524 ' 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:47.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.524 --rc genhtml_branch_coverage=1 00:35:47.524 --rc genhtml_function_coverage=1 00:35:47.524 --rc genhtml_legend=1 00:35:47.524 --rc geninfo_all_blocks=1 00:35:47.524 --rc geninfo_unexecuted_blocks=1 00:35:47.524 00:35:47.524 ' 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
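The lt/cmp_versions trace above is a plain field-wise version comparison: split both strings on the separators, walk the fields, and decide on the first difference. A simplified sketch covering dot-separated numeric fields only (the traced helper also splits on '-' and ':'):

# Returns success when $1 sorts before $2, e.g. lt 1.15 2.
lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1
}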
00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:47.524 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:47.783 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:47.783 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:47.783 18:56:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:35:47.784 18:56:03 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@456 -- # nvmf_veth_init 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:35:47.784 Cannot find device "nvmf_init_br" 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:35:47.784 Cannot find device "nvmf_init_br2" 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:35:47.784 Cannot find device "nvmf_tgt_br" 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:35:47.784 Cannot find device "nvmf_tgt_br2" 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:35:47.784 Cannot find device "nvmf_init_br" 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:35:47.784 Cannot find device "nvmf_init_br2" 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:35:47.784 Cannot find device "nvmf_tgt_br" 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:35:47.784 Cannot find device "nvmf_tgt_br2" 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:35:47.784 Cannot find device "nvmf_br" 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 00:35:47.784 Cannot find device "nvmf_init_if" 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:35:47.784 Cannot find device "nvmf_init_if2" 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:47.784 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:47.784 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:47.784 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
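The link plumbing above, together with the bridge enslaving just below, builds the whole test fabric: veth pairs whose "if" ends carry the addresses (initiator ends in the root namespace, target ends inside nvmf_tgt_ns_spdk) and whose "br" ends all join nvmf_br. A condensed one-pair-per-side sketch of the same topology (the harness creates two pairs on each side):

# One initiator pair and one target pair, bridged in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br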
00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:35:48.044 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:48.044 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:35:48.044 00:35:48.044 --- 10.0.0.3 ping statistics --- 00:35:48.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:48.044 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:35:48.044 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:35:48.044 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.099 ms 00:35:48.044 00:35:48.044 --- 10.0.0.4 ping statistics --- 00:35:48.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:48.044 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:48.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:48.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:35:48.044 00:35:48.044 --- 10.0.0.1 ping statistics --- 00:35:48.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:48.044 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:35:48.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:48.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:35:48.044 00:35:48.044 --- 10.0.0.2 ping statistics --- 00:35:48.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:48.044 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@457 -- # return 0 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:48.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # nvmfpid=127428 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # waitforlisten 127428 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 127428 ']' 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:48.044 18:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:48.044 [2024-12-14 18:56:03.777361] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:48.044 [2024-12-14 18:56:03.778702] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:48.044 [2024-12-14 18:56:03.778785] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:48.303 [2024-12-14 18:56:03.923695] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:48.303 [2024-12-14 18:56:03.997858] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:48.303 [2024-12-14 18:56:03.997934] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:48.303 [2024-12-14 18:56:03.997949] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:48.303 [2024-12-14 18:56:03.997961] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:48.303 [2024-12-14 18:56:03.997971] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:48.303 [2024-12-14 18:56:03.998172] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:48.303 [2024-12-14 18:56:03.998189] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:48.562 [2024-12-14 18:56:04.100698] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:48.562 [2024-12-14 18:56:04.101227] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:48.562 [2024-12-14 18:56:04.101317] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:49.130 5000+0 records in 00:35:49.130 5000+0 records out 00:35:49.130 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0264818 s, 387 MB/s 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:49.130 AIO0 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:49.130 [2024-12-14 18:56:04.851222] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.130 18:56:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:49.389 [2024-12-14 18:56:04.887691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 127428 0 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 127428 0 idle 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=127428 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 127428 -w 256 00:35:49.389 18:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 127428 root 20 0 64.2g 46464 33408 S 0.0 0.4 0:00.31 reactor_0' 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 127428 root 20 0 64.2g 46464 33408 S 0.0 0.4 0:00.31 reactor_0 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 127428 1 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 127428 1 idle 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=127428 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 127428 -w 256 00:35:49.389 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 127432 root 20 0 64.2g 46464 33408 S 0.0 0.4 0:00.00 reactor_1' 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 127432 root 20 0 64.2g 46464 33408 S 0.0 0.4 0:00.00 reactor_1 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=127496 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:49.647 
18:56:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 127428 0 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 127428 0 busy 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=127428 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:49.647 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 127428 -w 256 00:35:49.905 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 127428 root 20 0 64.2g 46976 33664 S 6.2 0.4 0:00.32 reactor_0' 00:35:49.905 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 127428 root 20 0 64.2g 46976 33664 S 6.2 0.4 0:00.32 reactor_0 00:35:49.905 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:49.905 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:49.905 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:35:49.905 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:35:49.905 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:49.905 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:49.905 18:56:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:35:50.840 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:35:50.840 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:50.840 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 127428 -w 256 00:35:50.840 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 127428 root 20 0 64.2g 47744 33792 R 99.9 0.4 0:01.81 reactor_0' 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 127428 root 20 0 64.2g 47744 33792 R 99.9 0.4 0:01.81 reactor_0 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = 
\i\d\l\e ]] 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 127428 1 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 127428 1 busy 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=127428 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 127428 -w 256 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 127432 root 20 0 64.2g 47744 33792 R 66.7 0.4 0:00.86 reactor_1' 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 127432 root 20 0 64.2g 47744 33792 R 66.7 0.4 0:00.86 reactor_1 00:35:51.098 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:51.099 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:51.099 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=66.7 00:35:51.099 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=66 00:35:51.099 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:51.099 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:51.099 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:51.099 18:56:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:51.099 18:56:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 127496 00:36:01.075 Initializing NVMe Controllers 00:36:01.075 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:36:01.075 Controller IO queue size 256, less than required. 00:36:01.075 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:01.075 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:01.075 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:01.075 Initialization complete. Launching workers. 
00:36:01.075 ======================================================== 00:36:01.075 Latency(us) 00:36:01.075 Device Information : IOPS MiB/s Average min max 00:36:01.075 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 7816.99 30.54 32796.61 7073.30 73642.77 00:36:01.075 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 6916.40 27.02 37071.94 9178.03 74467.48 00:36:01.075 ======================================================== 00:36:01.075 Total : 14733.39 57.55 34803.61 7073.30 74467.48 00:36:01.075 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 127428 0 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 127428 0 idle 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=127428 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 127428 -w 256 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 127428 root 20 0 64.2g 47744 33792 S 6.2 0.4 0:13.77 reactor_0' 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 127428 root 20 0 64.2g 47744 33792 S 6.2 0.4 0:13.77 reactor_0 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 127428 1 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 127428 1 idle 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=127428 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
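
The latency table above comes from the spdk_nvme_perf run launched a few seconds earlier. Its flags, annotated (semantics per the tool's options; values as used in this run):

    # -q 256    queue depth per namespace; requesting this against a controller
    #           queue of 256 triggers the 'less than required' notice above
    # -o 4096   I/O size in bytes
    # -w randrw with -M 30   random mixed workload, 30% reads / 70% writes
    # -t 10     run for 10 seconds
    # -c 0xC    core mask 0b1100, i.e. lcores 2 and 3 (the two cores in the table)
    # -r ...    transport ID of the listener created at the start of the test
    build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
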
idx=1 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:01.075 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 127428 -w 256 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 127432 root 20 0 64.2g 47744 33792 S 0.0 0.4 0:06.73 reactor_1' 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 127432 root 20 0 64.2g 47744 33792 S 0.0 0.4 0:06.73 reactor_1 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:36:01.076 18:56:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in 
{0..1} 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 127428 0 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 127428 0 idle 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=127428 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 127428 -w 256 00:36:02.453 18:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 127428 root 20 0 64.2g 49792 33792 S 6.7 0.4 0:13.84 reactor_0' 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 127428 root 20 0 64.2g 49792 33792 S 6.7 0.4 0:13.84 reactor_0 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 127428 1 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 127428 1 idle 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=127428 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- 
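
waitforserial, traced above right after the nvme connect, is a plain poll: keep listing block devices until one with the expected SERIAL shows up. A sketch reconstructed from the trace (the 15-iteration bound and 2-second sleep are the counters shown):

    waitforserial() {
        local serial=$1 want=${2:-1} i=0 have
        while (( i++ <= 15 )); do
            sleep 2
            # Count block devices whose SERIAL column matches;
            # grep -c prints 0 when nothing matches yet.
            have=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( have == want )) && return 0
        done
        return 1
    }

    waitforserial SPDKISFASTANDAWESOME   # the call made above
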
interrupt/common.sh@25 -- # (( j = 10 )) 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 127428 -w 256 00:36:02.453 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 127432 root 20 0 64.2g 49792 33792 S 0.0 0.4 0:06.74 reactor_1' 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 127432 root 20 0 64.2g 49792 33792 S 0.0 0.4 0:06.74 reactor_1 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:02.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:02.713 18:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:36:02.972 18:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:02.972 18:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:36:02.972 18:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:02.972 18:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:02.972 rmmod nvme_tcp 00:36:02.972 rmmod nvme_fabrics 00:36:02.972 rmmod nvme_keyring 00:36:02.972 18:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:02.972 18:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:36:02.972 18:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:36:02.972 18:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@513 -- # '[' -n 127428 ']' 00:36:02.972 18:56:18 nvmf_tcp.nvmf_interrupt 
-- nvmf/common.sh@514 -- # killprocess 127428 00:36:02.972 18:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 127428 ']' 00:36:02.972 18:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 127428 00:36:02.972 18:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:36:02.972 18:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:02.972 18:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 127428 00:36:03.231 18:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:03.231 18:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:03.231 killing process with pid 127428 00:36:03.231 18:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 127428' 00:36:03.231 18:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 127428 00:36:03.231 18:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 127428 00:36:03.490 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:03.490 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:03.490 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:03.490 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:36:03.490 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-save 00:36:03.490 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:03.490 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-restore 00:36:03.490 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:03.490 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:03.490 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:03.490 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:03.490 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:03.490 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:03.490 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:03.490 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:03.490 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:03.490 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:03.490 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:03.749 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:03.749 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:03.749 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:03.749 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:03.749 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:03.749 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # 
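
killprocess, traced at the start of this cleanup, is deliberately careful: kill -0 first proves the pid exists without delivering a signal, ps -o comm= confirms the command name, and a guard refuses to touch anything running as sudo. In sketch form (names follow the trace):

    killprocess() {
        local pid=$1 process_name
        # kill -0 sends no signal; it only checks the pid is alive and signalable.
        kill -0 "$pid" || return 1
        if [[ $(uname) == Linux ]]; then
            # Read the command name so a recycled pid is never killed by mistake.
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name == sudo ]] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap it; works here because the target is a child of this shell
    }
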
xtrace_disable_per_cmd _remove_spdk_ns 00:36:03.749 18:56:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:03.749 18:56:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:03.749 18:56:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0 00:36:03.749 00:36:03.749 real 0m16.318s 00:36:03.749 user 0m27.743s 00:36:03.749 sys 0m8.175s 00:36:03.749 18:56:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:03.749 ************************************ 00:36:03.749 END TEST nvmf_interrupt 00:36:03.749 ************************************ 00:36:03.749 18:56:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:03.749 ************************************ 00:36:03.749 END TEST nvmf_tcp 00:36:03.749 ************************************ 00:36:03.749 00:36:03.749 real 28m0.661s 00:36:03.749 user 82m1.673s 00:36:03.749 sys 6m16.569s 00:36:03.749 18:56:19 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:03.749 18:56:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:03.749 18:56:19 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:36:03.749 18:56:19 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:03.749 18:56:19 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:03.749 18:56:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:03.749 18:56:19 -- common/autotest_common.sh@10 -- # set +x 00:36:03.749 ************************************ 00:36:03.749 START TEST spdkcli_nvmf_tcp 00:36:03.749 ************************************ 00:36:03.749 18:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:04.009 * Looking for test storage... 
00:36:04.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:04.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.009 --rc genhtml_branch_coverage=1 00:36:04.009 --rc genhtml_function_coverage=1 00:36:04.009 --rc genhtml_legend=1 00:36:04.009 --rc geninfo_all_blocks=1 00:36:04.009 --rc geninfo_unexecuted_blocks=1 00:36:04.009 00:36:04.009 ' 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:04.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.009 --rc genhtml_branch_coverage=1 
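
The scripts/common.sh lines traced above are the stock bash version comparison that gates the newer lcov flags: split both version strings on '.', '-' and ':' and compare component-wise. A condensed sketch (assumes purely numeric components; the real helper also validates each component):

    # Succeeds (returns 0) when version $1 sorts strictly before $2.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            # Missing components default to 0, so 1.15 compares against 2.0.
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not 'less than'
    }

    version_lt 1.15 2 && echo 'lcov predates 2.x'   # the comparison performed above
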
00:36:04.009 --rc genhtml_function_coverage=1 00:36:04.009 --rc genhtml_legend=1 00:36:04.009 --rc geninfo_all_blocks=1 00:36:04.009 --rc geninfo_unexecuted_blocks=1 00:36:04.009 00:36:04.009 ' 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:04.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.009 --rc genhtml_branch_coverage=1 00:36:04.009 --rc genhtml_function_coverage=1 00:36:04.009 --rc genhtml_legend=1 00:36:04.009 --rc geninfo_all_blocks=1 00:36:04.009 --rc geninfo_unexecuted_blocks=1 00:36:04.009 00:36:04.009 ' 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:04.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.009 --rc genhtml_branch_coverage=1 00:36:04.009 --rc genhtml_function_coverage=1 00:36:04.009 --rc genhtml_legend=1 00:36:04.009 --rc geninfo_all_blocks=1 00:36:04.009 --rc geninfo_unexecuted_blocks=1 00:36:04.009 00:36:04.009 ' 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:04.009 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:04.009 18:56:19 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:04.010 18:56:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:04.010 18:56:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:04.010 18:56:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:04.010 18:56:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:04.010 18:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:04.010 18:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
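
The '[: : integer expression expected' complaint above comes from nvmf/common.sh line 33 testing an unset variable with -eq: '[' '' -eq 1 ']' is not a valid integer comparison, so test prints the error and the script only survives because the branch falls through. The usual defensive form looks like this (generic bash; FLAG is a stand-in, not the script's actual variable name):

    # Expanding with a default keeps the -eq test well-defined when FLAG is unset or empty.
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo 'flag enabled'
    fi
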
# set +x 00:36:04.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:04.010 18:56:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:04.010 18:56:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=127834 00:36:04.010 18:56:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 127834 00:36:04.010 18:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 127834 ']' 00:36:04.010 18:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:04.010 18:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:04.010 18:56:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:04.010 18:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:04.010 18:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:04.010 18:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:04.268 [2024-12-14 18:56:19.782188] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:04.269 [2024-12-14 18:56:19.782284] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127834 ] 00:36:04.269 [2024-12-14 18:56:19.922565] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:04.269 [2024-12-14 18:56:20.014036] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:04.269 [2024-12-14 18:56:20.014052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:05.205 18:56:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:05.205 18:56:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:36:05.205 18:56:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:05.205 18:56:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:05.205 18:56:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:05.205 18:56:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:05.205 18:56:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:05.205 18:56:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:05.205 18:56:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:05.205 18:56:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:05.205 18:56:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:05.205 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:05.205 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:05.205 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:05.205 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:05.205 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:05.205 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:05.205 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 
N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:05.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:05.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:05.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:05.205 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:05.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:05.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:05.205 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:05.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:05.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:05.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:05.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:05.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:05.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:05.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:05.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:05.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:05.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:05.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:05.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:05.205 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:05.205 ' 00:36:08.493 [2024-12-14 18:56:23.648385] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:09.436 [2024-12-14 18:56:24.973251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:11.971 [2024-12-14 18:56:27.426511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:13.874 [2024-12-14 18:56:29.539757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:15.776 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:15.776 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:15.776 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:15.776 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 
'Malloc4', True] 00:36:15.776 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:15.776 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:15.776 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:15.776 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:15.776 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:15.776 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:15.776 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:15.776 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:15.776 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:15.776 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:15.776 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:15.776 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:15.776 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:15.776 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:15.776 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:15.776 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:15.776 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:15.776 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:15.776 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:15.776 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:15.776 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:15.776 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:15.776 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:15.776 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:15.776 18:56:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:15.776 18:56:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:15.776 18:56:31 spdkcli_nvmf_tcp -- 
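
check_match, invoked just below, snapshots the live spdkcli tree and diffs it against a golden file. The mechanics, with paths as in this run (the redirect target is inferred from the rm -f that follows in the trace; only the three commands themselves appear in the xtrace output):

    # Dump the configured /nvmf subtree next to the golden file...
    scripts/spdkcli.py ll /nvmf > test/spdkcli/match_files/spdkcli_nvmf.test
    # ...let the match tool compare foo.test against foo.test.match
    # (the .match file can carry wildcard markers for run-specific values)...
    test/app/match/match test/spdkcli/match_files/spdkcli_nvmf.test.match
    # ...and discard the snapshot on success.
    rm -f test/spdkcli/match_files/spdkcli_nvmf.test
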
common/autotest_common.sh@10 -- # set +x 00:36:15.776 18:56:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:15.776 18:56:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:15.776 18:56:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:15.776 18:56:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:15.776 18:56:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:36:16.035 18:56:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:16.035 18:56:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:16.035 18:56:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:16.035 18:56:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:16.035 18:56:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:16.294 18:56:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:16.294 18:56:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:16.294 18:56:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:16.294 18:56:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:16.294 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:16.294 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:16.294 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:16.294 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:16.294 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:16.294 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:16.294 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:16.294 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:16.294 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:16.294 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:16.294 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:16.294 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:16.294 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:16.294 ' 00:36:21.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:21.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:21.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:21.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:21.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:21.591 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:21.591 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:21.591 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:21.591 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:21.591 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:21.591 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:21.591 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:21.591 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:21.591 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:21.850 18:56:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:21.850 18:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:21.850 18:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:21.850 18:56:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 127834 00:36:21.850 18:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 127834 ']' 00:36:21.850 18:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 127834 00:36:21.850 18:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:36:21.850 18:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:21.850 18:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 127834 00:36:21.850 18:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:21.850 18:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:21.850 18:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 127834' 00:36:21.850 killing process with pid 127834 00:36:21.850 18:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 127834 00:36:21.850 18:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 127834 00:36:22.108 Process with pid 127834 is not found 00:36:22.108 18:56:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:22.108 18:56:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:22.108 18:56:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 127834 ']' 00:36:22.108 18:56:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 127834 00:36:22.108 18:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 127834 ']' 00:36:22.108 18:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 127834 00:36:22.108 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (127834) - No such process 00:36:22.108 18:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 127834 is not found' 00:36:22.108 18:56:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:22.108 18:56:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:22.108 18:56:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:22.108 ************************************ 00:36:22.108 00:36:22.108 real 0m18.327s 00:36:22.108 user 0m39.766s 00:36:22.108 sys 
0m1.014s 00:36:22.108 18:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:22.108 18:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:22.108 END TEST spdkcli_nvmf_tcp 00:36:22.108 ************************************ 00:36:22.108 18:56:37 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:22.368 18:56:37 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:22.368 18:56:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:22.368 18:56:37 -- common/autotest_common.sh@10 -- # set +x 00:36:22.368 ************************************ 00:36:22.368 START TEST nvmf_identify_passthru 00:36:22.368 ************************************ 00:36:22.368 18:56:37 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:22.368 * Looking for test storage... 00:36:22.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:22.368 18:56:37 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:22.368 18:56:37 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:22.368 18:56:37 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:36:22.368 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:36:22.368 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:22.368 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:22.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.368 --rc genhtml_branch_coverage=1 00:36:22.368 --rc genhtml_function_coverage=1 00:36:22.368 --rc genhtml_legend=1 00:36:22.368 --rc geninfo_all_blocks=1 00:36:22.368 --rc geninfo_unexecuted_blocks=1 00:36:22.368 00:36:22.368 ' 00:36:22.368 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:22.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.368 --rc genhtml_branch_coverage=1 00:36:22.368 --rc genhtml_function_coverage=1 00:36:22.368 --rc genhtml_legend=1 00:36:22.368 --rc geninfo_all_blocks=1 00:36:22.368 --rc geninfo_unexecuted_blocks=1 00:36:22.368 00:36:22.368 ' 00:36:22.368 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:22.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.368 --rc genhtml_branch_coverage=1 00:36:22.368 --rc genhtml_function_coverage=1 00:36:22.368 --rc genhtml_legend=1 00:36:22.368 --rc geninfo_all_blocks=1 00:36:22.368 --rc geninfo_unexecuted_blocks=1 00:36:22.368 00:36:22.368 ' 00:36:22.368 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:22.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.368 --rc genhtml_branch_coverage=1 00:36:22.368 --rc genhtml_function_coverage=1 00:36:22.368 --rc genhtml_legend=1 00:36:22.368 --rc geninfo_all_blocks=1 00:36:22.368 --rc geninfo_unexecuted_blocks=1 00:36:22.368 00:36:22.368 ' 00:36:22.368 18:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:22.368 
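The cmp_versions trace above is the harness deciding whether the installed lcov (1.15) predates 2.x so it can pick the matching coverage-flag spelling: split both version strings on '.', '-', ':' and compare them numerically field by field, with missing fields treated as zero. A condensed, hedged reimplementation of that compare follows; the version_lt name is mine, and this is an illustrative stand-in, not the literal scripts/common.sh helper:

version_lt() {
    # Split on '.', '-', ':' like the traced IFS=.-: and compare field by
    # field; a missing field counts as 0, so "1.15" vs "2" compares 1 < 2.
    local IFS='.-:'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "lcov predates 2.x: keep the legacy --rc lcov_*_coverage=1 spelling"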
18:56:38 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:22.368 18:56:38 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:22.368 18:56:38 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.368 18:56:38 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.368 18:56:38 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.368 18:56:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:22.368 18:56:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:22.368 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:22.369 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:22.369 18:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:22.369 18:56:38 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:22.369 18:56:38 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:22.369 18:56:38 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:22.369 18:56:38 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:22.369 18:56:38 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.369 18:56:38 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.369 18:56:38 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.369 18:56:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:22.369 18:56:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.369 18:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:22.369 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:22.369 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@456 -- # nvmf_veth_init 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:22.369 Cannot find device "nvmf_init_br" 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:22.369 Cannot find device "nvmf_init_br2" 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:36:22.369 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:22.628 Cannot find device "nvmf_tgt_br" 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:22.628 Cannot find device "nvmf_tgt_br2" 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:22.628 Cannot find device "nvmf_init_br" 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:22.628 Cannot find device "nvmf_init_br2" 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:22.628 Cannot find device "nvmf_tgt_br" 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:22.628 Cannot find device "nvmf_tgt_br2" 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:22.628 Cannot find device "nvmf_br" 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:22.628 Cannot find device "nvmf_init_if" 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:22.628 Cannot find device "nvmf_init_if2" 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:22.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 00:36:22.628 18:56:38 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:22.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:22.628 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:22.887 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:22.887 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:36:22.887 00:36:22.887 --- 10.0.0.3 ping statistics --- 00:36:22.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.887 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:22.887 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:36:22.887 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:36:22.887 00:36:22.887 --- 10.0.0.4 ping statistics --- 00:36:22.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.887 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:22.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:22.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:36:22.887 00:36:22.887 --- 10.0.0.1 ping statistics --- 00:36:22.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.887 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:22.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:22.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:36:22.887 00:36:22.887 --- 10.0.0.2 ping statistics --- 00:36:22.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.887 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@457 -- # return 0 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:22.887 18:56:38 nvmf_identify_passthru -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:22.887 18:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:22.887 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:22.887 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:22.887 18:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:22.887 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:36:22.887 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:36:22.887 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:36:22.887 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:36:22.887 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:36:22.887 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:36:22.887 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:22.887 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:36:22.887 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:36:22.887 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:36:22.887 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:36:22.887 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:36:22.887 18:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:36:22.887 18:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:36:22.887 18:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:36:22.887 18:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:22.887 18:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:23.146 18:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
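The nvme_serial_number=12340 assignment above falls out of two small pipelines: gen_nvme.sh enumerates the NVMe PCI addresses (the trace saw 0000:00:10.0 and 0000:00:11.0 and kept the first), and the spdk_nvme_identify output is filtered with grep/awk. A hedged one-screen equivalent, with paths as printed in the trace; the head -n1 and the variable names here are mine:

rootdir=/home/vagrant/spdk_repo/spdk
# All NVMe BDFs known to SPDK's config generator; first one wins.
bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
# "Serial Number: 12340" -> third whitespace-separated field; same for the model.
serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
    | grep 'Serial Number:' | awk '{print $3}')
model=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
    | grep 'Model Number:' | awk '{print $3}')
echo "bdf=$bdf serial=$serial model=$model"

The passthru check later replays the same two pipelines against the TCP-exported controller and fails if either value drifts from 12340/QEMU.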
00:36:23.146 18:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:36:23.146 18:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:23.146 18:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:23.404 18:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:36:23.404 18:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:23.404 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:23.404 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:23.404 18:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:23.404 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:23.404 18:56:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:23.404 18:56:39 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=128347 00:36:23.404 18:56:39 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:23.404 18:56:39 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:23.404 18:56:39 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 128347 00:36:23.404 18:56:39 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 128347 ']' 00:36:23.404 18:56:39 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:23.404 18:56:39 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:23.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:23.404 18:56:39 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:23.404 18:56:39 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:23.404 18:56:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:23.404 [2024-12-14 18:56:39.069948] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:23.404 [2024-12-14 18:56:39.070044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:23.663 [2024-12-14 18:56:39.213920] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:23.663 [2024-12-14 18:56:39.288741] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:23.663 [2024-12-14 18:56:39.288805] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:23.663 [2024-12-14 18:56:39.288828] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:23.663 [2024-12-14 18:56:39.288845] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:36:23.664 [2024-12-14 18:56:39.288860] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:23.664 [2024-12-14 18:56:39.289049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:23.664 [2024-12-14 18:56:39.289202] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:23.664 [2024-12-14 18:56:39.289888] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:23.664 [2024-12-14 18:56:39.289940] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:36:24.600 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.600 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:24.600 [2024-12-14 18:56:40.130348] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.600 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:24.600 [2024-12-14 18:56:40.140479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.600 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:24.600 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:24.600 Nvme0n1 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.600 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.600 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.600 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:24.600 [2024-12-14 18:56:40.270906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.600 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:24.600 [ 00:36:24.600 { 00:36:24.600 "allow_any_host": true, 00:36:24.600 "hosts": [], 00:36:24.600 "listen_addresses": [], 00:36:24.600 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:24.600 "subtype": "Discovery" 00:36:24.600 }, 00:36:24.600 { 00:36:24.600 "allow_any_host": true, 00:36:24.600 "hosts": [], 00:36:24.600 "listen_addresses": [ 00:36:24.600 { 00:36:24.600 "adrfam": "IPv4", 00:36:24.600 "traddr": "10.0.0.3", 00:36:24.600 "trsvcid": "4420", 00:36:24.600 "trtype": "TCP" 00:36:24.600 } 00:36:24.600 ], 00:36:24.600 "max_cntlid": 65519, 00:36:24.600 "max_namespaces": 1, 00:36:24.600 "min_cntlid": 1, 00:36:24.600 "model_number": "SPDK bdev Controller", 00:36:24.600 "namespaces": [ 00:36:24.600 { 00:36:24.600 "bdev_name": "Nvme0n1", 00:36:24.600 "name": "Nvme0n1", 00:36:24.600 "nguid": "6B02A3E20D694F219DB4CFEE452D1B49", 00:36:24.600 "nsid": 1, 00:36:24.600 "uuid": "6b02a3e2-0d69-4f21-9db4-cfee452d1b49" 00:36:24.600 } 00:36:24.600 ], 00:36:24.600 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:24.600 "serial_number": "SPDK00000000000001", 00:36:24.600 "subtype": "NVMe" 00:36:24.600 } 00:36:24.600 ] 00:36:24.600 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.600 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:24.600 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:24.600 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:24.859 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:36:24.859 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:24.859 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:24.859 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:25.118 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:36:25.118 18:56:40 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:36:25.118 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:36:25.118 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:25.118 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.118 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:25.118 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.118 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:25.118 18:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:25.118 18:56:40 nvmf_identify_passthru -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:25.118 18:56:40 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:25.118 18:56:40 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:25.118 18:56:40 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:25.118 18:56:40 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:25.118 18:56:40 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:25.118 rmmod nvme_tcp 00:36:25.118 rmmod nvme_fabrics 00:36:25.118 rmmod nvme_keyring 00:36:25.118 18:56:40 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:25.118 18:56:40 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:25.118 18:56:40 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:25.118 18:56:40 nvmf_identify_passthru -- nvmf/common.sh@513 -- # '[' -n 128347 ']' 00:36:25.118 18:56:40 nvmf_identify_passthru -- nvmf/common.sh@514 -- # killprocess 128347 00:36:25.118 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 128347 ']' 00:36:25.118 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 128347 00:36:25.118 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:36:25.118 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:25.118 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 128347 00:36:25.378 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:25.378 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:25.378 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 128347' 00:36:25.378 killing process with pid 128347 00:36:25.378 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 128347 00:36:25.378 18:56:40 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 128347 00:36:25.378 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:25.378 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:25.378 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:25.378 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:25.378 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-restore 00:36:25.378 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-save 00:36:25.378 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@787 -- # grep -v 
SPDK_NVMF 00:36:25.378 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:25.378 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:25.378 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:25.378 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:25.636 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:25.637 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:25.637 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:25.637 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:25.637 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:25.637 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:25.637 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:25.637 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:25.637 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:25.637 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:25.637 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:25.637 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:25.637 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:25.637 18:56:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:25.637 18:56:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:25.637 18:56:41 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 00:36:25.637 00:36:25.637 real 0m3.493s 00:36:25.637 user 0m7.776s 00:36:25.637 sys 0m0.981s 00:36:25.637 18:56:41 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:25.637 ************************************ 00:36:25.637 END TEST nvmf_identify_passthru 00:36:25.637 ************************************ 00:36:25.637 18:56:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:25.896 18:56:41 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:36:25.896 18:56:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:25.896 18:56:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:25.896 18:56:41 -- common/autotest_common.sh@10 -- # set +x 00:36:25.896 ************************************ 00:36:25.896 START TEST nvmf_dif 00:36:25.896 ************************************ 00:36:25.896 18:56:41 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:36:25.896 * Looking for test storage... 
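The nvmftestfini teardown above mirrors the bring-up: iptables is restored minus the SPDK_NVMF-tagged rules (the iptr helper in the trace), the bridge and veth links are unbridged and deleted, and the namespace goes last. A hedged sketch for a single-pair topology; the final netns removal is hidden behind xtrace_disable in the log, so that step is assumed:

iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged test rules
ip link set nvmf_init_br nomaster
ip link set nvmf_tgt_br nomaster
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if                            # deleting one end removes the veth pair
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns delete nvmf_tgt_ns_spdk                       # assumed: silenced by remove_spdk_ns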
00:36:25.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:25.896 18:56:41 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:25.896 18:56:41 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:25.896 18:56:41 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:36:25.896 18:56:41 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:25.896 18:56:41 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:25.896 18:56:41 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:25.896 18:56:41 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:25.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.896 --rc genhtml_branch_coverage=1 00:36:25.896 --rc genhtml_function_coverage=1 00:36:25.896 --rc genhtml_legend=1 00:36:25.896 --rc geninfo_all_blocks=1 00:36:25.896 --rc geninfo_unexecuted_blocks=1 00:36:25.896 00:36:25.896 ' 00:36:25.896 18:56:41 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:25.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.896 --rc genhtml_branch_coverage=1 00:36:25.896 --rc genhtml_function_coverage=1 00:36:25.896 --rc genhtml_legend=1 00:36:25.896 --rc geninfo_all_blocks=1 00:36:25.896 --rc geninfo_unexecuted_blocks=1 00:36:25.896 00:36:25.896 ' 00:36:25.897 18:56:41 nvmf_dif -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:36:25.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.897 --rc genhtml_branch_coverage=1 00:36:25.897 --rc genhtml_function_coverage=1 00:36:25.897 --rc genhtml_legend=1 00:36:25.897 --rc geninfo_all_blocks=1 00:36:25.897 --rc geninfo_unexecuted_blocks=1 00:36:25.897 00:36:25.897 ' 00:36:25.897 18:56:41 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:25.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.897 --rc genhtml_branch_coverage=1 00:36:25.897 --rc genhtml_function_coverage=1 00:36:25.897 --rc genhtml_legend=1 00:36:25.897 --rc geninfo_all_blocks=1 00:36:25.897 --rc geninfo_unexecuted_blocks=1 00:36:25.897 00:36:25.897 ' 00:36:25.897 18:56:41 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:25.897 18:56:41 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:25.897 18:56:41 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:25.897 18:56:41 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:25.897 18:56:41 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:25.897 18:56:41 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.897 18:56:41 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.897 18:56:41 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.897 18:56:41 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:25.897 18:56:41 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:25.897 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:25.897 18:56:41 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:25.897 18:56:41 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:25.897 18:56:41 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:25.897 18:56:41 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:25.897 18:56:41 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:25.897 18:56:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:25.897 18:56:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:36:25.897 18:56:41 
nvmf_dif -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@456 -- # nvmf_veth_init 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:25.897 Cannot find device "nvmf_init_br" 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@162 -- # true 00:36:25.897 18:56:41 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:26.156 Cannot find device "nvmf_init_br2" 00:36:26.156 18:56:41 nvmf_dif -- nvmf/common.sh@163 -- # true 00:36:26.156 18:56:41 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:26.156 Cannot find device "nvmf_tgt_br" 00:36:26.156 18:56:41 nvmf_dif -- nvmf/common.sh@164 -- # true 00:36:26.156 18:56:41 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:26.156 Cannot find device "nvmf_tgt_br2" 00:36:26.156 18:56:41 nvmf_dif -- nvmf/common.sh@165 -- # true 00:36:26.156 18:56:41 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:26.156 Cannot find device "nvmf_init_br" 00:36:26.156 18:56:41 nvmf_dif -- nvmf/common.sh@166 -- # true 00:36:26.156 18:56:41 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:26.156 Cannot find device "nvmf_init_br2" 00:36:26.156 18:56:41 nvmf_dif -- nvmf/common.sh@167 -- # true 00:36:26.156 18:56:41 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:26.156 Cannot find device "nvmf_tgt_br" 00:36:26.156 18:56:41 nvmf_dif -- nvmf/common.sh@168 -- # true 00:36:26.156 18:56:41 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:26.156 Cannot find device "nvmf_tgt_br2" 00:36:26.156 18:56:41 nvmf_dif -- nvmf/common.sh@169 -- # true 00:36:26.156 18:56:41 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:26.156 Cannot find device "nvmf_br" 00:36:26.156 18:56:41 nvmf_dif -- nvmf/common.sh@170 -- # true 00:36:26.156 18:56:41 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:36:26.156 Cannot find device "nvmf_init_if" 00:36:26.156 18:56:41 nvmf_dif -- nvmf/common.sh@171 -- # true 00:36:26.156 18:56:41 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:26.157 Cannot find device "nvmf_init_if2" 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@172 -- # true 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:26.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@173 -- # true 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:26.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@174 -- # true 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:26.157 18:56:41 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:26.416 18:56:41 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:26.416 18:56:41 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:26.416 18:56:41 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:26.416 18:56:41 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:26.416 18:56:41 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:26.416 18:56:41 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:26.416 18:56:41 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:26.416 18:56:41 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:26.416 18:56:41 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:26.416 18:56:41 nvmf_dif -- 
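The ip(8) calls above build the test topology: veth pairs for both sides, the target ends moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.3/10.0.0.4, the initiator ends kept on the host with 10.0.0.1/10.0.0.2, and all of the *_br peers enslaved to the nvmf_br bridge. Boiled down to one pair per side (a sketch; the second pair of each kind and all error handling are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge the host-side peers together
  ip link set nvmf_tgt_br master nvmf_br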
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:36:26.416 18:56:41 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:36:26.416 18:56:41 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:36:26.416 18:56:42 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:36:26.416 18:56:42 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:36:26.416 18:56:42 nvmf_dif -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:36:26.416 18:56:42 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:36:26.416 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:36:26.416 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms
00:36:26.416
00:36:26.416 --- 10.0.0.3 ping statistics ---
00:36:26.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:26.416 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms
00:36:26.416 18:56:42 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:36:26.416 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:36:26.416 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.028 ms
00:36:26.416
00:36:26.416 --- 10.0.0.4 ping statistics ---
00:36:26.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:26.416 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms
00:36:26.416 18:56:42 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:36:26.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:26.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms
00:36:26.416
00:36:26.416 --- 10.0.0.1 ping statistics ---
00:36:26.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:26.416 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:36:26.416 18:56:42 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:36:26.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:26.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:36:26.416 00:36:26.416 --- 10.0.0.2 ping statistics --- 00:36:26.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.416 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:36:26.416 18:56:42 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:26.416 18:56:42 nvmf_dif -- nvmf/common.sh@457 -- # return 0 00:36:26.416 18:56:42 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:36:26.416 18:56:42 nvmf_dif -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:26.675 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:26.675 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:36:26.675 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:36:26.675 18:56:42 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:26.675 18:56:42 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:26.675 18:56:42 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:26.675 18:56:42 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:26.676 18:56:42 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:26.676 18:56:42 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:26.935 18:56:42 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:26.935 18:56:42 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:26.935 18:56:42 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:26.935 18:56:42 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:26.935 18:56:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:26.935 18:56:42 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=128745 00:36:26.935 18:56:42 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 128745 00:36:26.935 18:56:42 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:26.935 18:56:42 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 128745 ']' 00:36:26.935 18:56:42 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:26.935 18:56:42 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:26.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:26.935 18:56:42 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:26.935 18:56:42 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:26.935 18:56:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:26.935 [2024-12-14 18:56:42.524450] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:26.935 [2024-12-14 18:56:42.524542] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:26.935 [2024-12-14 18:56:42.666803] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:27.194 [2024-12-14 18:56:42.746909] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
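Three steps complete the environment above: the ipts wrapper punches TCP/4420 through the firewall while tagging every rule it adds with an SPDK_NVMF comment, the four pings prove both directions across the bridge, and nvmf_tgt is launched inside the target namespace (-i 0 sets the shared-memory id, -e 0xFFFF enables every tracepoint group). A condensed sketch; the polling loop stands in for waitforlisten, whose internals this log does not show:

  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }  # tag rules so teardown can find them
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                             # host -> namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # namespace -> host
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1    # assumed wait loop; the real helper is waitforlisten
  done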
00:36:27.194 [2024-12-14 18:56:42.746975] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:27.194 [2024-12-14 18:56:42.746990] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:27.194 [2024-12-14 18:56:42.747000] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:27.194 [2024-12-14 18:56:42.747009] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:27.194 [2024-12-14 18:56:42.747051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:27.194 18:56:42 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:27.194 18:56:42 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:36:27.194 18:56:42 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:27.194 18:56:42 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:27.194 18:56:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:27.194 18:56:42 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:27.194 18:56:42 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:27.194 18:56:42 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:27.194 18:56:42 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.194 18:56:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:27.194 [2024-12-14 18:56:42.936336] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:27.194 18:56:42 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.194 18:56:42 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:27.194 18:56:42 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:27.194 18:56:42 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:27.194 18:56:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:27.453 ************************************ 00:36:27.453 START TEST fio_dif_1_default 00:36:27.453 ************************************ 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:27.453 bdev_null0 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.453 18:56:42 
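The control plane assembled here, expressed as the equivalent direct rpc.py calls (a sketch; the test drives them through its rpc_cmd wrapper, and the last two calls appear just below in the trace): a TCP transport with DIF insert/strip offload, a 64 MiB null bdev with 512-byte blocks plus 16 bytes of type-1 protection metadata, and a subsystem exporting it on 10.0.0.3:4420.

  rpc=./scripts/rpc.py   # against the nvmf_tgt started above
  $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
  $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420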
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:27.453 [2024-12-14 18:56:42.984529] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:27.453 18:56:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:27.454 { 00:36:27.454 "params": { 00:36:27.454 "name": "Nvme$subsystem", 00:36:27.454 "trtype": "$TEST_TRANSPORT", 00:36:27.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:27.454 "adrfam": "ipv4", 00:36:27.454 "trsvcid": "$NVMF_PORT", 00:36:27.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:27.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:27.454 "hdgst": ${hdgst:-false}, 00:36:27.454 "ddgst": ${ddgst:-false} 00:36:27.454 }, 00:36:27.454 "method": "bdev_nvme_attach_controller" 00:36:27.454 } 00:36:27.454 EOF 00:36:27.454 )") 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:27.454 18:56:42 
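The sanitizer bookkeeping that begins here, and continues just below, reduces to a small pattern: probe the fio plugin with ldd for a linked ASan runtime, and preload whatever is found ahead of the plugin itself so stock fio can load SPDK's bdev engine. A sketch with the same paths as the log (asan_lib stays empty on this non-ASan build):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')    # empty unless built with ASan
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61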
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:36:27.454 18:56:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:27.454 18:56:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 00:36:27.454 18:56:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:36:27.454 18:56:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:36:27.454 "params": { 00:36:27.454 "name": "Nvme0", 00:36:27.454 "trtype": "tcp", 00:36:27.454 "traddr": "10.0.0.3", 00:36:27.454 "adrfam": "ipv4", 00:36:27.454 "trsvcid": "4420", 00:36:27.454 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:27.454 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:27.454 "hdgst": false, 00:36:27.454 "ddgst": false 00:36:27.454 }, 00:36:27.454 "method": "bdev_nvme_attach_controller" 00:36:27.454 }' 00:36:27.454 18:56:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:27.454 18:56:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:27.454 18:56:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:27.454 18:56:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:27.454 18:56:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:27.454 18:56:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:27.454 18:56:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:27.454 18:56:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:27.454 18:56:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:27.454 18:56:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:27.713 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:27.713 fio-3.35 00:36:27.713 Starting 1 thread 00:36:39.917 00:36:39.917 filename0: (groupid=0, jobs=1): err= 0: pid=128816: Sat Dec 14 18:56:53 2024 00:36:39.917 read: IOPS=1893, BW=7573KiB/s (7755kB/s)(74.1MiB/10016msec) 00:36:39.917 slat (nsec): min=5821, max=40473, avg=6771.75, stdev=1719.47 00:36:39.917 clat (usec): min=353, max=42419, avg=2092.35, stdev=8116.40 00:36:39.917 lat (usec): min=359, max=42427, avg=2099.12, stdev=8116.47 00:36:39.917 clat percentiles (usec): 00:36:39.917 | 1.00th=[ 363], 5.00th=[ 363], 10.00th=[ 367], 20.00th=[ 375], 00:36:39.917 | 30.00th=[ 379], 40.00th=[ 383], 50.00th=[ 388], 60.00th=[ 
396], 00:36:39.917 | 70.00th=[ 408], 80.00th=[ 424], 90.00th=[ 453], 95.00th=[ 529], 00:36:39.917 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:36:39.917 | 99.99th=[42206] 00:36:39.917 bw ( KiB/s): min= 5440, max=10336, per=100.00%, avg=7583.35, stdev=1459.90, samples=20 00:36:39.917 iops : min= 1360, max= 2584, avg=1895.80, stdev=365.00, samples=20 00:36:39.917 lat (usec) : 500=94.29%, 750=1.47%, 1000=0.04% 00:36:39.917 lat (msec) : 4=0.02%, 50=4.18% 00:36:39.917 cpu : usr=89.87%, sys=9.11%, ctx=27, majf=0, minf=0 00:36:39.917 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:39.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.917 issued rwts: total=18964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.917 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:39.917 00:36:39.917 Run status group 0 (all jobs): 00:36:39.917 READ: bw=7573KiB/s (7755kB/s), 7573KiB/s-7573KiB/s (7755kB/s-7755kB/s), io=74.1MiB (77.7MB), run=10016-10016msec 00:36:39.917 18:56:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:39.917 18:56:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:39.917 18:56:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:39.917 18:56:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:39.917 18:56:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:39.917 18:56:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:39.917 18:56:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.917 18:56:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:39.917 18:56:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.918 00:36:39.918 real 0m11.062s 00:36:39.918 user 0m9.674s 00:36:39.918 sys 0m1.202s 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:39.918 ************************************ 00:36:39.918 END TEST fio_dif_1_default 00:36:39.918 ************************************ 00:36:39.918 18:56:54 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:39.918 18:56:54 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:39.918 18:56:54 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:39.918 18:56:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:39.918 ************************************ 00:36:39.918 START TEST fio_dif_1_multi_subsystems 00:36:39.918 ************************************ 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- 
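A quick consistency check on the single-thread run above: 18964 completed 4096-byte reads over 10016 ms is 18964 × 4096 B ÷ 10.016 s ≈ 7.76 MB/s and ≈ 1893 IOPS, matching the reported 7573KiB/s (7755kB/s) and IOPS=1893. The large clat stdev (8116 usec against a 2092 usec mean) reflects the bimodal spread visible in the percentiles: 95% of reads finish in roughly half a millisecond while the 99th percentile sits near 41 ms.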
target/dif.sh@92 -- # local files=1 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:39.918 bdev_null0 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:39.918 [2024-12-14 18:56:54.101072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:39.918 bdev_null1 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:39.918 { 00:36:39.918 "params": { 00:36:39.918 "name": "Nvme$subsystem", 00:36:39.918 "trtype": "$TEST_TRANSPORT", 00:36:39.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:39.918 "adrfam": "ipv4", 00:36:39.918 "trsvcid": "$NVMF_PORT", 00:36:39.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:39.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:39.918 "hdgst": ${hdgst:-false}, 00:36:39.918 "ddgst": ${ddgst:-false} 00:36:39.918 }, 00:36:39.918 "method": "bdev_nvme_attach_controller" 00:36:39.918 } 00:36:39.918 EOF 00:36:39.918 )") 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:39.918 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:39.919 { 00:36:39.919 "params": { 00:36:39.919 "name": "Nvme$subsystem", 00:36:39.919 "trtype": "$TEST_TRANSPORT", 00:36:39.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:39.919 "adrfam": "ipv4", 00:36:39.919 "trsvcid": "$NVMF_PORT", 00:36:39.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:39.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:39.919 "hdgst": ${hdgst:-false}, 00:36:39.919 "ddgst": ${ddgst:-false} 00:36:39.919 }, 00:36:39.919 "method": "bdev_nvme_attach_controller" 00:36:39.919 } 00:36:39.919 EOF 00:36:39.919 )") 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 
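The fio job description itself travels over /dev/fd/61 (gen_fio_conf) and never appears in the log. A plausible equivalent for this two-subsystem run; the section names and the rw/bs/iodepth values match what fio echoes below, while the Nvme0n1/Nvme1n1 bdev names are an assumption based on the two attached controllers:

  cat > jobfile.fio <<EOF
[global]
ioengine=spdk_bdev
rw=randread
bs=4096
iodepth=4
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
EOF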
00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:36:39.919 "params": { 00:36:39.919 "name": "Nvme0", 00:36:39.919 "trtype": "tcp", 00:36:39.919 "traddr": "10.0.0.3", 00:36:39.919 "adrfam": "ipv4", 00:36:39.919 "trsvcid": "4420", 00:36:39.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:39.919 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:39.919 "hdgst": false, 00:36:39.919 "ddgst": false 00:36:39.919 }, 00:36:39.919 "method": "bdev_nvme_attach_controller" 00:36:39.919 },{ 00:36:39.919 "params": { 00:36:39.919 "name": "Nvme1", 00:36:39.919 "trtype": "tcp", 00:36:39.919 "traddr": "10.0.0.3", 00:36:39.919 "adrfam": "ipv4", 00:36:39.919 "trsvcid": "4420", 00:36:39.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:39.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:39.919 "hdgst": false, 00:36:39.919 "ddgst": false 00:36:39.919 }, 00:36:39.919 "method": "bdev_nvme_attach_controller" 00:36:39.919 }' 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:39.919 18:56:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:39.919 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:39.919 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:39.919 fio-3.35 00:36:39.919 Starting 2 threads 00:36:49.895 00:36:49.895 filename0: (groupid=0, jobs=1): err= 0: pid=128972: Sat Dec 14 18:57:05 2024 00:36:49.895 read: IOPS=288, BW=1153KiB/s (1180kB/s)(11.3MiB/10021msec) 00:36:49.895 slat (nsec): min=5900, max=46671, avg=8108.07, stdev=3788.48 00:36:49.895 clat (usec): min=365, max=41684, avg=13854.11, stdev=19035.30 00:36:49.895 lat (usec): min=371, max=41697, avg=13862.21, stdev=19035.39 00:36:49.895 clat percentiles (usec): 00:36:49.895 | 1.00th=[ 379], 5.00th=[ 392], 10.00th=[ 400], 20.00th=[ 408], 00:36:49.895 | 30.00th=[ 416], 40.00th=[ 429], 50.00th=[ 449], 60.00th=[ 611], 00:36:49.895 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:49.895 | 99.00th=[41157], 
99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:36:49.895 | 99.99th=[41681] 00:36:49.895 bw ( KiB/s): min= 576, max= 1920, per=48.91%, avg=1153.60, stdev=364.63, samples=20 00:36:49.895 iops : min= 144, max= 480, avg=288.40, stdev=91.16, samples=20 00:36:49.895 lat (usec) : 500=58.31%, 750=5.92%, 1000=1.70% 00:36:49.895 lat (msec) : 2=0.97%, 50=33.10% 00:36:49.895 cpu : usr=95.81%, sys=3.72%, ctx=20, majf=0, minf=0 00:36:49.895 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:49.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.895 issued rwts: total=2888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:49.895 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:49.895 filename1: (groupid=0, jobs=1): err= 0: pid=128973: Sat Dec 14 18:57:05 2024 00:36:49.895 read: IOPS=301, BW=1206KiB/s (1235kB/s)(11.8MiB/10032msec) 00:36:49.895 slat (nsec): min=5930, max=51164, avg=8151.52, stdev=4095.25 00:36:49.895 clat (usec): min=370, max=41733, avg=13244.54, stdev=18800.26 00:36:49.895 lat (usec): min=377, max=41748, avg=13252.70, stdev=18800.44 00:36:49.895 clat percentiles (usec): 00:36:49.895 | 1.00th=[ 383], 5.00th=[ 396], 10.00th=[ 400], 20.00th=[ 408], 00:36:49.895 | 30.00th=[ 420], 40.00th=[ 433], 50.00th=[ 449], 60.00th=[ 486], 00:36:49.895 | 70.00th=[40633], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:36:49.895 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:36:49.895 | 99.99th=[41681] 00:36:49.895 bw ( KiB/s): min= 512, max= 1952, per=51.20%, avg=1208.00, stdev=431.56, samples=20 00:36:49.895 iops : min= 128, max= 488, avg=302.00, stdev=107.89, samples=20 00:36:49.895 lat (usec) : 500=60.78%, 750=5.32%, 1000=1.22% 00:36:49.895 lat (msec) : 2=1.06%, 50=31.61% 00:36:49.895 cpu : usr=95.73%, sys=3.83%, ctx=9, majf=0, minf=0 00:36:49.895 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:49.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.895 issued rwts: total=3024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:49.895 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:49.895 00:36:49.895 Run status group 0 (all jobs): 00:36:49.895 READ: bw=2357KiB/s (2414kB/s), 1153KiB/s-1206KiB/s (1180kB/s-1235kB/s), io=23.1MiB (24.2MB), run=10021-10032msec 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems 
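Checking the two-thread numbers: filename0's 2888 reads over 10021 ms come to ≈ 288 IOPS, i.e. 1153 KiB/s at 4 KiB, exactly as reported, and the two files together give the 2357 KiB/s aggregate (their measurement windows differ slightly, 10021 vs 10032 ms). The 13-14 ms average clat is again bimodality rather than a typical latency: roughly 60% of reads complete in ~400-500 usec while about a third land above 40 ms.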
-- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:49.895 ************************************ 00:36:49.895 END TEST fio_dif_1_multi_subsystems 00:36:49.895 ************************************ 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.895 00:36:49.895 real 0m11.230s 00:36:49.895 user 0m20.031s 00:36:49.895 sys 0m1.057s 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:49.895 18:57:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:49.895 18:57:05 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:49.895 18:57:05 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:49.895 18:57:05 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:49.895 18:57:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:49.895 ************************************ 00:36:49.895 START TEST fio_dif_rand_params 00:36:49.895 ************************************ 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for 
sub in "$@" 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:49.895 bdev_null0 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:49.895 [2024-12-14 18:57:05.383722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:49.895 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # gen_fio_conf 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:49.896 { 00:36:49.896 "params": { 00:36:49.896 "name": "Nvme$subsystem", 00:36:49.896 "trtype": "$TEST_TRANSPORT", 00:36:49.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:49.896 "adrfam": "ipv4", 00:36:49.896 "trsvcid": "$NVMF_PORT", 00:36:49.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:49.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:49.896 "hdgst": ${hdgst:-false}, 00:36:49.896 "ddgst": ${ddgst:-false} 00:36:49.896 }, 00:36:49.896 "method": "bdev_nvme_attach_controller" 00:36:49.896 } 00:36:49.896 EOF 00:36:49.896 )") 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:36:49.896 "params": { 00:36:49.896 "name": "Nvme0", 00:36:49.896 "trtype": "tcp", 00:36:49.896 "traddr": "10.0.0.3", 00:36:49.896 "adrfam": "ipv4", 00:36:49.896 "trsvcid": "4420", 00:36:49.896 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:49.896 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:49.896 "hdgst": false, 00:36:49.896 "ddgst": false 00:36:49.896 }, 00:36:49.896 "method": "bdev_nvme_attach_controller" 00:36:49.896 }' 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:49.896 18:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:49.896 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:49.896 ... 
00:36:49.896 fio-3.35 00:36:49.896 Starting 3 threads 00:36:56.465 00:36:56.465 filename0: (groupid=0, jobs=1): err= 0: pid=129124: Sat Dec 14 18:57:11 2024 00:36:56.465 read: IOPS=357, BW=44.7MiB/s (46.9MB/s)(224MiB/5001msec) 00:36:56.465 slat (nsec): min=5958, max=43822, avg=9933.59, stdev=5299.30 00:36:56.465 clat (usec): min=3258, max=53617, avg=8359.70, stdev=4785.39 00:36:56.465 lat (usec): min=3264, max=53624, avg=8369.63, stdev=4786.09 00:36:56.465 clat percentiles (usec): 00:36:56.465 | 1.00th=[ 3556], 5.00th=[ 3621], 10.00th=[ 3654], 20.00th=[ 3752], 00:36:56.465 | 30.00th=[ 5997], 40.00th=[ 7635], 50.00th=[ 8094], 60.00th=[ 8717], 00:36:56.465 | 70.00th=[10683], 80.00th=[11863], 90.00th=[12518], 95.00th=[13042], 00:36:56.465 | 99.00th=[14877], 99.50th=[49546], 99.90th=[52691], 99.95th=[53740], 00:36:56.465 | 99.99th=[53740] 00:36:56.465 bw ( KiB/s): min=39168, max=64768, per=42.25%, avg=45188.33, stdev=7848.87, samples=9 00:36:56.465 iops : min= 306, max= 506, avg=353.00, stdev=61.32, samples=9 00:36:56.465 lat (msec) : 4=26.48%, 10=41.06%, 20=31.79%, 50=0.28%, 100=0.39% 00:36:56.465 cpu : usr=93.66%, sys=4.50%, ctx=11, majf=0, minf=9 00:36:56.465 IO depths : 1=24.0%, 2=76.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:56.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:56.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:56.465 issued rwts: total=1790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:56.465 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:56.465 filename0: (groupid=0, jobs=1): err= 0: pid=129125: Sat Dec 14 18:57:11 2024 00:36:56.465 read: IOPS=213, BW=26.7MiB/s (28.0MB/s)(134MiB/5009msec) 00:36:56.465 slat (nsec): min=3778, max=46703, avg=14752.27, stdev=7037.39 00:36:56.465 clat (usec): min=4701, max=53092, avg=14006.20, stdev=14204.31 00:36:56.465 lat (usec): min=4712, max=53098, avg=14020.96, stdev=14203.96 00:36:56.465 clat percentiles (usec): 00:36:56.465 | 1.00th=[ 5407], 5.00th=[ 6063], 10.00th=[ 6390], 20.00th=[ 6849], 00:36:56.465 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9110], 00:36:56.465 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[47973], 95.00th=[49546], 00:36:56.465 | 99.00th=[50594], 99.50th=[51119], 99.90th=[52167], 99.95th=[53216], 00:36:56.465 | 99.99th=[53216] 00:36:56.465 bw ( KiB/s): min=16896, max=34560, per=25.56%, avg=27340.80, stdev=6315.59, samples=10 00:36:56.465 iops : min= 132, max= 270, avg=213.60, stdev=49.34, samples=10 00:36:56.465 lat (msec) : 10=80.67%, 20=5.32%, 50=10.64%, 100=3.36% 00:36:56.465 cpu : usr=94.97%, sys=3.99%, ctx=7, majf=0, minf=9 00:36:56.465 IO depths : 1=5.9%, 2=94.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:56.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:56.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:56.465 issued rwts: total=1071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:56.465 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:56.465 filename0: (groupid=0, jobs=1): err= 0: pid=129126: Sat Dec 14 18:57:11 2024 00:36:56.465 read: IOPS=265, BW=33.1MiB/s (34.8MB/s)(166MiB/5016msec) 00:36:56.465 slat (nsec): min=6138, max=61154, avg=12743.55, stdev=5643.64 00:36:56.465 clat (usec): min=3724, max=52874, avg=11295.76, stdev=10373.96 00:36:56.465 lat (usec): min=3730, max=52884, avg=11308.50, stdev=10374.07 00:36:56.465 clat percentiles (usec): 00:36:56.465 | 1.00th=[ 5145], 5.00th=[ 5735], 10.00th=[ 6063], 
20.00th=[ 6521], 00:36:56.465 | 30.00th=[ 6849], 40.00th=[ 7504], 50.00th=[ 9110], 60.00th=[ 9896], 00:36:56.465 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11731], 95.00th=[46924], 00:36:56.465 | 99.00th=[50594], 99.50th=[51643], 99.90th=[52167], 99.95th=[52691], 00:36:56.465 | 99.99th=[52691] 00:36:56.465 bw ( KiB/s): min=26112, max=45568, per=31.76%, avg=33971.20, stdev=5568.35, samples=10 00:36:56.465 iops : min= 204, max= 356, avg=265.40, stdev=43.50, samples=10 00:36:56.465 lat (msec) : 4=0.45%, 10=61.73%, 20=30.83%, 50=5.41%, 100=1.58% 00:36:56.465 cpu : usr=94.02%, sys=4.45%, ctx=4, majf=0, minf=9 00:36:56.465 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:56.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:56.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:56.465 issued rwts: total=1330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:56.465 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:56.465 00:36:56.465 Run status group 0 (all jobs): 00:36:56.465 READ: bw=104MiB/s (110MB/s), 26.7MiB/s-44.7MiB/s (28.0MB/s-46.9MB/s), io=524MiB (549MB), run=5001-5016msec 00:36:56.465 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:56.465 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:56.465 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:56.465 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:56.465 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:56.465 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:56.465 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.465 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:56.465 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.465 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- 
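Arithmetic check on the 128 KiB pass: filename0's 1790 reads over 5001 ms are ≈ 358 IOPS × 128 KiB ≈ 44.7 MiB/s (46.9 MB/s), as reported, and the three jobs sum to the 104 MiB/s aggregate (44.7 + 26.7 + 33.1). The 40-50 ms clusters in each job's tail line up with the ~48-54 ms clat maxima recorded above.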
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:56.466 bdev_null0 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:56.466 [2024-12-14 18:57:11.392106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:56.466 bdev_null1 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:56.466 18:57:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:56.466 bdev_null2 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 
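
[Editor's aside] For anyone replaying this setup outside the harness: each create_subsystem call traced above reduces to four RPCs (rpc_cmd in the trace is the test harness's wrapper around scripts/rpc.py). A minimal per-subsystem sketch, with names, sizes, and the 10.0.0.3:4420 listener lifted verbatim from the trace; it assumes a running SPDK target with a TCP transport already created (that happens earlier in this log, outside this excerpt):

    # null bdev: 64 MB, 512-byte blocks + 16 B metadata, protection info type 2
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
    # subsystem with a fixed serial number, open to any host NQN
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    # attach the bdev as a namespace and listen on NVMe/TCP
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420

The same four calls repeat for cnode1/bdev_null1 and cnode2/bdev_null2 below, changing only the index.
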
00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:56.466 { 00:36:56.466 "params": { 00:36:56.466 "name": "Nvme$subsystem", 00:36:56.466 "trtype": "$TEST_TRANSPORT", 00:36:56.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:56.466 "adrfam": "ipv4", 00:36:56.466 "trsvcid": "$NVMF_PORT", 00:36:56.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:56.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:56.466 "hdgst": ${hdgst:-false}, 00:36:56.466 "ddgst": ${ddgst:-false} 00:36:56.466 }, 00:36:56.466 "method": "bdev_nvme_attach_controller" 00:36:56.466 } 00:36:56.466 EOF 00:36:56.466 )") 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:56.466 { 00:36:56.466 "params": { 00:36:56.466 "name": "Nvme$subsystem", 00:36:56.466 "trtype": "$TEST_TRANSPORT", 00:36:56.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:56.466 "adrfam": "ipv4", 00:36:56.466 "trsvcid": "$NVMF_PORT", 00:36:56.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:56.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:56.466 "hdgst": ${hdgst:-false}, 00:36:56.466 "ddgst": ${ddgst:-false} 00:36:56.466 }, 00:36:56.466 "method": "bdev_nvme_attach_controller" 00:36:56.466 } 00:36:56.466 EOF 
00:36:56.466 )") 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:56.466 18:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:56.466 { 00:36:56.466 "params": { 00:36:56.466 "name": "Nvme$subsystem", 00:36:56.467 "trtype": "$TEST_TRANSPORT", 00:36:56.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:56.467 "adrfam": "ipv4", 00:36:56.467 "trsvcid": "$NVMF_PORT", 00:36:56.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:56.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:56.467 "hdgst": ${hdgst:-false}, 00:36:56.467 "ddgst": ${ddgst:-false} 00:36:56.467 }, 00:36:56.467 "method": "bdev_nvme_attach_controller" 00:36:56.467 } 00:36:56.467 EOF 00:36:56.467 )") 00:36:56.467 18:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:36:56.467 18:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:36:56.467 18:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:36:56.467 18:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:36:56.467 "params": { 00:36:56.467 "name": "Nvme0", 00:36:56.467 "trtype": "tcp", 00:36:56.467 "traddr": "10.0.0.3", 00:36:56.467 "adrfam": "ipv4", 00:36:56.467 "trsvcid": "4420", 00:36:56.467 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:56.467 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:56.467 "hdgst": false, 00:36:56.467 "ddgst": false 00:36:56.467 }, 00:36:56.467 "method": "bdev_nvme_attach_controller" 00:36:56.467 },{ 00:36:56.467 "params": { 00:36:56.467 "name": "Nvme1", 00:36:56.467 "trtype": "tcp", 00:36:56.467 "traddr": "10.0.0.3", 00:36:56.467 "adrfam": "ipv4", 00:36:56.467 "trsvcid": "4420", 00:36:56.467 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:56.467 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:56.467 "hdgst": false, 00:36:56.467 "ddgst": false 00:36:56.467 }, 00:36:56.467 "method": "bdev_nvme_attach_controller" 00:36:56.467 },{ 00:36:56.467 "params": { 00:36:56.467 "name": "Nvme2", 00:36:56.467 "trtype": "tcp", 00:36:56.467 "traddr": "10.0.0.3", 00:36:56.467 "adrfam": "ipv4", 00:36:56.467 "trsvcid": "4420", 00:36:56.467 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:56.467 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:56.467 "hdgst": false, 00:36:56.467 "ddgst": false 00:36:56.467 }, 00:36:56.467 "method": "bdev_nvme_attach_controller" 00:36:56.467 }' 00:36:56.467 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:56.467 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:56.467 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:56.467 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:56.467 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:56.467 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:56.467 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:56.467 18:57:11 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:56.467 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:56.467 18:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:56.467 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:56.467 ... 00:36:56.467 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:56.467 ... 00:36:56.467 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:56.467 ... 00:36:56.467 fio-3.35 00:36:56.467 Starting 24 threads 00:37:14.560 00:37:14.560 filename0: (groupid=0, jobs=1): err= 0: pid=129220: Sat Dec 14 18:57:28 2024 00:37:14.560 read: IOPS=940, BW=3761KiB/s (3851kB/s)(36.7MiB/10002msec) 00:37:14.560 slat (usec): min=3, max=8018, avg=21.00, stdev=117.11 00:37:14.560 clat (msec): min=2, max=202, avg=16.84, stdev=20.71 00:37:14.560 lat (msec): min=2, max=202, avg=16.86, stdev=20.71 00:37:14.560 clat percentiles (msec): 00:37:14.560 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:37:14.560 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:37:14.560 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 39], 95.00th=[ 72], 00:37:14.560 | 99.00th=[ 96], 99.50th=[ 107], 99.90th=[ 169], 99.95th=[ 203], 00:37:14.560 | 99.99th=[ 203] 00:37:14.560 bw ( KiB/s): min= 608, max= 6284, per=7.15%, avg=3638.37, stdev=2610.18, samples=19 00:37:14.560 iops : min= 152, max= 1571, avg=909.58, stdev=652.55, samples=19 00:37:14.560 lat (msec) : 4=0.51%, 10=23.81%, 20=65.24%, 50=1.31%, 100=8.34% 00:37:14.560 lat (msec) : 250=0.80% 00:37:14.560 cpu : usr=69.56%, sys=1.35%, ctx=684, majf=0, minf=9 00:37:14.560 IO depths : 1=5.7%, 2=11.5%, 4=23.6%, 8=52.2%, 16=6.9%, 32=0.0%, >=64=0.0% 00:37:14.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.560 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.560 issued rwts: total=9404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.560 filename0: (groupid=0, jobs=1): err= 0: pid=129221: Sat Dec 14 18:57:28 2024 00:37:14.560 read: IOPS=951, BW=3806KiB/s (3898kB/s)(37.2MiB/10001msec) 00:37:14.560 slat (usec): min=3, max=8017, avg=28.03, stdev=129.93 00:37:14.560 clat (usec): min=1338, max=194200, avg=16560.83, stdev=20178.15 00:37:14.560 lat (usec): min=1344, max=194211, avg=16588.86, stdev=20179.44 00:37:14.560 clat percentiles (msec): 00:37:14.560 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:37:14.560 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:37:14.560 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 39], 95.00th=[ 69], 00:37:14.560 | 99.00th=[ 95], 99.50th=[ 100], 99.90th=[ 194], 99.95th=[ 194], 00:37:14.560 | 99.99th=[ 194] 00:37:14.560 bw ( KiB/s): min= 512, max= 6400, per=7.21%, avg=3665.32, stdev=2598.85, samples=19 00:37:14.560 iops : min= 128, max= 1600, avg=916.32, stdev=649.72, samples=19 00:37:14.560 lat (msec) : 2=0.34%, 4=0.67%, 10=38.35%, 20=49.91%, 50=1.99% 00:37:14.560 lat (msec) : 100=8.27%, 250=0.47% 00:37:14.560 cpu : usr=68.43%, sys=1.39%, ctx=485, majf=0, minf=9 00:37:14.560 IO depths : 1=5.7%, 2=11.6%, 4=23.7%, 8=52.1%, 16=6.9%, 32=0.0%, 
>=64=0.0% 00:37:14.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.560 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.560 issued rwts: total=9517,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.560 filename0: (groupid=0, jobs=1): err= 0: pid=129222: Sat Dec 14 18:57:28 2024 00:37:14.560 read: IOPS=845, BW=3381KiB/s (3462kB/s)(33.0MiB/10007msec) 00:37:14.561 slat (usec): min=3, max=8032, avg=16.18, stdev=131.03 00:37:14.561 clat (msec): min=2, max=116, avg=18.79, stdev=20.66 00:37:14.561 lat (msec): min=2, max=116, avg=18.80, stdev=20.66 00:37:14.561 clat percentiles (msec): 00:37:14.561 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:37:14.561 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:37:14.561 | 70.00th=[ 12], 80.00th=[ 16], 90.00th=[ 58], 95.00th=[ 72], 00:37:14.561 | 99.00th=[ 96], 99.50th=[ 107], 99.90th=[ 116], 99.95th=[ 116], 00:37:14.561 | 99.99th=[ 117] 00:37:14.561 bw ( KiB/s): min= 848, max= 6144, per=6.37%, avg=3238.53, stdev=2373.28, samples=19 00:37:14.561 iops : min= 212, max= 1536, avg=809.58, stdev=593.31, samples=19 00:37:14.561 lat (msec) : 4=0.22%, 10=18.22%, 20=67.40%, 50=3.32%, 100=10.06% 00:37:14.561 lat (msec) : 250=0.78% 00:37:14.561 cpu : usr=59.51%, sys=1.20%, ctx=711, majf=0, minf=9 00:37:14.561 IO depths : 1=4.6%, 2=9.2%, 4=19.8%, 8=58.2%, 16=8.3%, 32=0.0%, >=64=0.0% 00:37:14.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.561 complete : 0=0.0%, 4=92.7%, 8=1.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.561 issued rwts: total=8459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.561 filename0: (groupid=0, jobs=1): err= 0: pid=129223: Sat Dec 14 18:57:28 2024 00:37:14.561 read: IOPS=336, BW=1346KiB/s (1378kB/s)(13.2MiB/10038msec) 00:37:14.561 slat (usec): min=4, max=8030, avg=19.13, stdev=190.55 00:37:14.561 clat (msec): min=6, max=139, avg=47.38, stdev=18.97 00:37:14.561 lat (msec): min=6, max=139, avg=47.40, stdev=18.96 00:37:14.561 clat percentiles (msec): 00:37:14.561 | 1.00th=[ 10], 5.00th=[ 24], 10.00th=[ 27], 20.00th=[ 32], 00:37:14.561 | 30.00th=[ 36], 40.00th=[ 41], 50.00th=[ 44], 60.00th=[ 48], 00:37:14.561 | 70.00th=[ 56], 80.00th=[ 62], 90.00th=[ 71], 95.00th=[ 84], 00:37:14.561 | 99.00th=[ 108], 99.50th=[ 125], 99.90th=[ 140], 99.95th=[ 140], 00:37:14.561 | 99.99th=[ 140] 00:37:14.561 bw ( KiB/s): min= 896, max= 2096, per=2.65%, avg=1346.00, stdev=352.93, samples=20 00:37:14.561 iops : min= 224, max= 524, avg=336.40, stdev=88.19, samples=20 00:37:14.561 lat (msec) : 10=1.04%, 20=0.59%, 50=61.47%, 100=35.15%, 250=1.75% 00:37:14.561 cpu : usr=43.06%, sys=0.88%, ctx=1442, majf=0, minf=0 00:37:14.561 IO depths : 1=1.5%, 2=3.3%, 4=10.2%, 8=72.8%, 16=12.3%, 32=0.0%, >=64=0.0% 00:37:14.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.561 complete : 0=0.0%, 4=90.1%, 8=5.5%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.561 issued rwts: total=3377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.561 filename0: (groupid=0, jobs=1): err= 0: pid=129224: Sat Dec 14 18:57:28 2024 00:37:14.561 read: IOPS=304, BW=1220KiB/s (1249kB/s)(12.0MiB/10034msec) 00:37:14.561 slat (usec): min=6, max=3947, avg=12.63, stdev=71.52 00:37:14.561 clat (msec): min=21, max=111, avg=52.32, stdev=17.51 
00:37:14.561 lat (msec): min=21, max=111, avg=52.34, stdev=17.51 00:37:14.561 clat percentiles (msec): 00:37:14.561 | 1.00th=[ 23], 5.00th=[ 29], 10.00th=[ 35], 20.00th=[ 37], 00:37:14.561 | 30.00th=[ 41], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 55], 00:37:14.561 | 70.00th=[ 60], 80.00th=[ 65], 90.00th=[ 79], 95.00th=[ 85], 00:37:14.561 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 111], 99.95th=[ 111], 00:37:14.561 | 99.99th=[ 112] 00:37:14.561 bw ( KiB/s): min= 896, max= 1560, per=2.39%, avg=1216.70, stdev=228.24, samples=20 00:37:14.561 iops : min= 224, max= 390, avg=304.10, stdev=57.07, samples=20 00:37:14.561 lat (msec) : 50=54.90%, 100=43.56%, 250=1.54% 00:37:14.561 cpu : usr=36.41%, sys=0.80%, ctx=1069, majf=0, minf=9 00:37:14.561 IO depths : 1=1.3%, 2=3.0%, 4=10.4%, 8=73.1%, 16=12.2%, 32=0.0%, >=64=0.0% 00:37:14.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.561 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.561 issued rwts: total=3060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.561 filename0: (groupid=0, jobs=1): err= 0: pid=129225: Sat Dec 14 18:57:28 2024 00:37:14.561 read: IOPS=339, BW=1357KiB/s (1389kB/s)(13.3MiB/10037msec) 00:37:14.561 slat (usec): min=4, max=4031, avg=14.71, stdev=99.72 00:37:14.561 clat (msec): min=15, max=124, avg=47.03, stdev=18.75 00:37:14.561 lat (msec): min=15, max=124, avg=47.04, stdev=18.76 00:37:14.561 clat percentiles (msec): 00:37:14.561 | 1.00th=[ 20], 5.00th=[ 24], 10.00th=[ 26], 20.00th=[ 31], 00:37:14.561 | 30.00th=[ 35], 40.00th=[ 40], 50.00th=[ 45], 60.00th=[ 48], 00:37:14.561 | 70.00th=[ 56], 80.00th=[ 62], 90.00th=[ 73], 95.00th=[ 84], 00:37:14.561 | 99.00th=[ 97], 99.50th=[ 107], 99.90th=[ 125], 99.95th=[ 125], 00:37:14.561 | 99.99th=[ 125] 00:37:14.561 bw ( KiB/s): min= 872, max= 1920, per=2.67%, avg=1356.50, stdev=345.23, samples=20 00:37:14.561 iops : min= 218, max= 480, avg=339.10, stdev=86.33, samples=20 00:37:14.561 lat (msec) : 20=1.03%, 50=65.25%, 100=32.73%, 250=1.00% 00:37:14.561 cpu : usr=38.99%, sys=0.79%, ctx=1240, majf=0, minf=9 00:37:14.561 IO depths : 1=0.7%, 2=1.8%, 4=7.8%, 8=76.5%, 16=13.2%, 32=0.0%, >=64=0.0% 00:37:14.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.561 complete : 0=0.0%, 4=89.8%, 8=5.9%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.561 issued rwts: total=3404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.561 filename0: (groupid=0, jobs=1): err= 0: pid=129226: Sat Dec 14 18:57:28 2024 00:37:14.561 read: IOPS=940, BW=3763KiB/s (3853kB/s)(36.8MiB/10004msec) 00:37:14.561 slat (usec): min=3, max=5847, avg=20.81, stdev=61.36 00:37:14.561 clat (msec): min=6, max=187, avg=16.85, stdev=20.07 00:37:14.561 lat (msec): min=6, max=187, avg=16.87, stdev=20.06 00:37:14.561 clat percentiles (msec): 00:37:14.561 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:37:14.561 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:37:14.561 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 47], 95.00th=[ 68], 00:37:14.561 | 99.00th=[ 96], 99.50th=[ 108], 99.90th=[ 188], 99.95th=[ 188], 00:37:14.561 | 99.99th=[ 188] 00:37:14.561 bw ( KiB/s): min= 512, max= 6284, per=7.20%, avg=3663.53, stdev=2583.17, samples=19 00:37:14.561 iops : min= 128, max= 1571, avg=915.84, stdev=645.82, samples=19 00:37:14.561 lat (msec) : 10=32.56%, 20=56.16%, 50=2.48%, 100=7.99%, 250=0.82% 
00:37:14.561 cpu : usr=69.86%, sys=1.40%, ctx=888, majf=0, minf=9 00:37:14.561 IO depths : 1=5.8%, 2=11.6%, 4=23.7%, 8=52.2%, 16=6.8%, 32=0.0%, >=64=0.0% 00:37:14.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.561 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.561 issued rwts: total=9411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.561 filename0: (groupid=0, jobs=1): err= 0: pid=129227: Sat Dec 14 18:57:28 2024 00:37:14.561 read: IOPS=940, BW=3763KiB/s (3853kB/s)(36.8MiB/10005msec) 00:37:14.561 slat (usec): min=4, max=4045, avg=27.49, stdev=59.30 00:37:14.561 clat (msec): min=6, max=217, avg=16.77, stdev=20.08 00:37:14.561 lat (msec): min=6, max=217, avg=16.80, stdev=20.08 00:37:14.561 clat percentiles (msec): 00:37:14.561 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:37:14.561 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:37:14.561 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 41], 95.00th=[ 67], 00:37:14.561 | 99.00th=[ 97], 99.50th=[ 108], 99.90th=[ 186], 99.95th=[ 186], 00:37:14.561 | 99.99th=[ 218] 00:37:14.561 bw ( KiB/s): min= 688, max= 6400, per=7.20%, avg=3662.26, stdev=2587.11, samples=19 00:37:14.561 iops : min= 172, max= 1600, avg=915.53, stdev=646.80, samples=19 00:37:14.561 lat (msec) : 10=37.94%, 20=50.82%, 50=2.07%, 100=8.28%, 250=0.88% 00:37:14.561 cpu : usr=71.85%, sys=1.37%, ctx=705, majf=0, minf=9 00:37:14.561 IO depths : 1=5.8%, 2=11.6%, 4=23.7%, 8=52.1%, 16=6.8%, 32=0.0%, >=64=0.0% 00:37:14.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.561 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.561 issued rwts: total=9411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.561 filename1: (groupid=0, jobs=1): err= 0: pid=129228: Sat Dec 14 18:57:28 2024 00:37:14.561 read: IOPS=308, BW=1232KiB/s (1262kB/s)(12.1MiB/10039msec) 00:37:14.561 slat (usec): min=3, max=8040, avg=22.95, stdev=288.03 00:37:14.561 clat (msec): min=16, max=122, avg=51.78, stdev=17.64 00:37:14.561 lat (msec): min=16, max=122, avg=51.81, stdev=17.64 00:37:14.561 clat percentiles (msec): 00:37:14.561 | 1.00th=[ 22], 5.00th=[ 26], 10.00th=[ 34], 20.00th=[ 36], 00:37:14.561 | 30.00th=[ 43], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 52], 00:37:14.561 | 70.00th=[ 60], 80.00th=[ 67], 90.00th=[ 75], 95.00th=[ 85], 00:37:14.561 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 124], 99.95th=[ 124], 00:37:14.561 | 99.99th=[ 124] 00:37:14.561 bw ( KiB/s): min= 895, max= 1632, per=2.42%, avg=1228.45, stdev=252.24, samples=20 00:37:14.561 iops : min= 223, max= 408, avg=307.05, stdev=63.10, samples=20 00:37:14.561 lat (msec) : 20=0.52%, 50=58.52%, 100=39.61%, 250=1.36% 00:37:14.561 cpu : usr=32.78%, sys=0.72%, ctx=892, majf=0, minf=9 00:37:14.561 IO depths : 1=1.5%, 2=3.2%, 4=11.3%, 8=72.1%, 16=11.9%, 32=0.0%, >=64=0.0% 00:37:14.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.561 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.561 issued rwts: total=3093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.561 filename1: (groupid=0, jobs=1): err= 0: pid=129229: Sat Dec 14 18:57:28 2024 00:37:14.561 read: IOPS=318, BW=1275KiB/s (1306kB/s)(12.5MiB/10040msec) 00:37:14.561 slat (usec): min=6, 
max=4029, avg=15.20, stdev=113.97 00:37:14.561 clat (msec): min=18, max=146, avg=50.05, stdev=18.52 00:37:14.561 lat (msec): min=18, max=146, avg=50.06, stdev=18.52 00:37:14.561 clat percentiles (msec): 00:37:14.561 | 1.00th=[ 23], 5.00th=[ 27], 10.00th=[ 32], 20.00th=[ 36], 00:37:14.561 | 30.00th=[ 40], 40.00th=[ 43], 50.00th=[ 47], 60.00th=[ 50], 00:37:14.561 | 70.00th=[ 55], 80.00th=[ 63], 90.00th=[ 75], 95.00th=[ 89], 00:37:14.561 | 99.00th=[ 111], 99.50th=[ 116], 99.90th=[ 148], 99.95th=[ 148], 00:37:14.561 | 99.99th=[ 148] 00:37:14.561 bw ( KiB/s): min= 768, max= 1792, per=2.50%, avg=1273.55, stdev=272.68, samples=20 00:37:14.561 iops : min= 192, max= 448, avg=318.35, stdev=68.16, samples=20 00:37:14.561 lat (msec) : 20=0.12%, 50=62.48%, 100=35.30%, 250=2.09% 00:37:14.561 cpu : usr=44.90%, sys=0.95%, ctx=1374, majf=0, minf=9 00:37:14.561 IO depths : 1=1.3%, 2=2.8%, 4=10.2%, 8=73.3%, 16=12.4%, 32=0.0%, >=64=0.0% 00:37:14.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.561 complete : 0=0.0%, 4=90.0%, 8=5.6%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.561 issued rwts: total=3201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.562 filename1: (groupid=0, jobs=1): err= 0: pid=129230: Sat Dec 14 18:57:28 2024 00:37:14.562 read: IOPS=293, BW=1172KiB/s (1200kB/s)(11.5MiB/10026msec) 00:37:14.562 slat (usec): min=4, max=8080, avg=23.52, stdev=277.28 00:37:14.562 clat (msec): min=17, max=131, avg=54.42, stdev=20.32 00:37:14.562 lat (msec): min=17, max=131, avg=54.45, stdev=20.31 00:37:14.562 clat percentiles (msec): 00:37:14.562 | 1.00th=[ 23], 5.00th=[ 27], 10.00th=[ 34], 20.00th=[ 37], 00:37:14.562 | 30.00th=[ 44], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 58], 00:37:14.562 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 85], 95.00th=[ 96], 00:37:14.562 | 99.00th=[ 114], 99.50th=[ 114], 99.90th=[ 132], 99.95th=[ 132], 00:37:14.562 | 99.99th=[ 132] 00:37:14.562 bw ( KiB/s): min= 768, max= 1584, per=2.30%, avg=1168.45, stdev=270.67, samples=20 00:37:14.562 iops : min= 192, max= 396, avg=292.05, stdev=67.62, samples=20 00:37:14.562 lat (msec) : 20=0.24%, 50=51.94%, 100=43.57%, 250=4.25% 00:37:14.562 cpu : usr=35.62%, sys=0.66%, ctx=1006, majf=0, minf=9 00:37:14.562 IO depths : 1=1.6%, 2=3.6%, 4=11.5%, 8=71.3%, 16=12.0%, 32=0.0%, >=64=0.0% 00:37:14.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.562 complete : 0=0.0%, 4=90.6%, 8=4.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.562 issued rwts: total=2938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.562 filename1: (groupid=0, jobs=1): err= 0: pid=129231: Sat Dec 14 18:57:28 2024 00:37:14.562 read: IOPS=279, BW=1118KiB/s (1145kB/s)(10.9MiB/10009msec) 00:37:14.562 slat (usec): min=4, max=8037, avg=14.87, stdev=151.95 00:37:14.562 clat (msec): min=19, max=126, avg=57.11, stdev=20.25 00:37:14.562 lat (msec): min=19, max=126, avg=57.13, stdev=20.25 00:37:14.562 clat percentiles (msec): 00:37:14.562 | 1.00th=[ 24], 5.00th=[ 30], 10.00th=[ 35], 20.00th=[ 40], 00:37:14.562 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 54], 60.00th=[ 60], 00:37:14.562 | 70.00th=[ 66], 80.00th=[ 74], 90.00th=[ 85], 95.00th=[ 95], 00:37:14.562 | 99.00th=[ 115], 99.50th=[ 127], 99.90th=[ 127], 99.95th=[ 127], 00:37:14.562 | 99.99th=[ 127] 00:37:14.562 bw ( KiB/s): min= 768, max= 1560, per=2.20%, avg=1119.58, stdev=278.11, samples=19 00:37:14.562 iops : min= 192, max= 390, 
avg=279.89, stdev=69.53, samples=19 00:37:14.562 lat (msec) : 20=0.14%, 50=44.91%, 100=52.48%, 250=2.47% 00:37:14.562 cpu : usr=39.67%, sys=0.68%, ctx=1203, majf=0, minf=9 00:37:14.562 IO depths : 1=2.6%, 2=5.8%, 4=16.4%, 8=64.9%, 16=10.3%, 32=0.0%, >=64=0.0% 00:37:14.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.562 complete : 0=0.0%, 4=91.7%, 8=2.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.562 issued rwts: total=2797,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.562 filename1: (groupid=0, jobs=1): err= 0: pid=129232: Sat Dec 14 18:57:28 2024 00:37:14.562 read: IOPS=352, BW=1410KiB/s (1444kB/s)(13.8MiB/10052msec) 00:37:14.562 slat (usec): min=6, max=5001, avg=16.30, stdev=128.71 00:37:14.562 clat (msec): min=6, max=114, avg=45.24, stdev=15.37 00:37:14.562 lat (msec): min=6, max=114, avg=45.25, stdev=15.38 00:37:14.562 clat percentiles (msec): 00:37:14.562 | 1.00th=[ 18], 5.00th=[ 24], 10.00th=[ 28], 20.00th=[ 34], 00:37:14.562 | 30.00th=[ 37], 40.00th=[ 41], 50.00th=[ 44], 60.00th=[ 47], 00:37:14.562 | 70.00th=[ 50], 80.00th=[ 56], 90.00th=[ 66], 95.00th=[ 73], 00:37:14.562 | 99.00th=[ 96], 99.50th=[ 103], 99.90th=[ 115], 99.95th=[ 115], 00:37:14.562 | 99.99th=[ 115] 00:37:14.562 bw ( KiB/s): min= 896, max= 2028, per=2.78%, avg=1411.25, stdev=259.98, samples=20 00:37:14.562 iops : min= 224, max= 507, avg=352.75, stdev=64.99, samples=20 00:37:14.562 lat (msec) : 10=0.90%, 20=0.73%, 50=69.46%, 100=28.22%, 250=0.68% 00:37:14.562 cpu : usr=43.32%, sys=1.03%, ctx=1456, majf=0, minf=9 00:37:14.562 IO depths : 1=1.0%, 2=2.2%, 4=9.7%, 8=74.7%, 16=12.4%, 32=0.0%, >=64=0.0% 00:37:14.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.562 complete : 0=0.0%, 4=89.9%, 8=5.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.562 issued rwts: total=3543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.562 filename1: (groupid=0, jobs=1): err= 0: pid=129233: Sat Dec 14 18:57:28 2024 00:37:14.562 read: IOPS=297, BW=1191KiB/s (1220kB/s)(11.7MiB/10035msec) 00:37:14.562 slat (usec): min=6, max=8018, avg=17.05, stdev=170.80 00:37:14.562 clat (msec): min=23, max=143, avg=53.55, stdev=18.07 00:37:14.562 lat (msec): min=23, max=143, avg=53.57, stdev=18.07 00:37:14.562 clat percentiles (msec): 00:37:14.562 | 1.00th=[ 26], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 39], 00:37:14.562 | 30.00th=[ 44], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 55], 00:37:14.562 | 70.00th=[ 61], 80.00th=[ 67], 90.00th=[ 79], 95.00th=[ 90], 00:37:14.562 | 99.00th=[ 108], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 144], 00:37:14.562 | 99.99th=[ 144] 00:37:14.562 bw ( KiB/s): min= 760, max= 1536, per=2.34%, avg=1190.20, stdev=228.42, samples=20 00:37:14.562 iops : min= 190, max= 384, avg=297.55, stdev=57.11, samples=20 00:37:14.562 lat (msec) : 50=53.96%, 100=43.79%, 250=2.24% 00:37:14.562 cpu : usr=39.18%, sys=0.73%, ctx=1260, majf=0, minf=9 00:37:14.562 IO depths : 1=1.9%, 2=4.2%, 4=12.3%, 8=69.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:37:14.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.562 complete : 0=0.0%, 4=90.6%, 8=4.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.562 issued rwts: total=2989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.562 filename1: (groupid=0, jobs=1): err= 0: pid=129234: Sat Dec 14 18:57:28 2024 
00:37:14.562 read: IOPS=308, BW=1234KiB/s (1264kB/s)(12.1MiB/10013msec) 00:37:14.562 slat (usec): min=4, max=8049, avg=17.24, stdev=204.26 00:37:14.562 clat (msec): min=21, max=142, avg=51.75, stdev=17.93 00:37:14.562 lat (msec): min=21, max=142, avg=51.77, stdev=17.93 00:37:14.562 clat percentiles (msec): 00:37:14.562 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 37], 00:37:14.562 | 30.00th=[ 41], 40.00th=[ 45], 50.00th=[ 48], 60.00th=[ 52], 00:37:14.562 | 70.00th=[ 58], 80.00th=[ 66], 90.00th=[ 75], 95.00th=[ 86], 00:37:14.562 | 99.00th=[ 108], 99.50th=[ 114], 99.90th=[ 144], 99.95th=[ 144], 00:37:14.562 | 99.99th=[ 144] 00:37:14.562 bw ( KiB/s): min= 776, max= 1544, per=2.45%, avg=1246.74, stdev=244.26, samples=19 00:37:14.562 iops : min= 194, max= 386, avg=311.68, stdev=61.07, samples=19 00:37:14.562 lat (msec) : 50=56.52%, 100=41.47%, 250=2.01% 00:37:14.562 cpu : usr=42.93%, sys=0.85%, ctx=1468, majf=0, minf=9 00:37:14.562 IO depths : 1=1.7%, 2=4.0%, 4=11.4%, 8=70.8%, 16=12.1%, 32=0.0%, >=64=0.0% 00:37:14.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.562 complete : 0=0.0%, 4=90.7%, 8=5.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.562 issued rwts: total=3089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.562 filename1: (groupid=0, jobs=1): err= 0: pid=129235: Sat Dec 14 18:57:28 2024 00:37:14.562 read: IOPS=303, BW=1216KiB/s (1245kB/s)(11.9MiB/10045msec) 00:37:14.562 slat (usec): min=4, max=8048, avg=16.57, stdev=205.20 00:37:14.562 clat (msec): min=20, max=123, avg=52.48, stdev=18.29 00:37:14.562 lat (msec): min=20, max=123, avg=52.50, stdev=18.29 00:37:14.562 clat percentiles (msec): 00:37:14.562 | 1.00th=[ 23], 5.00th=[ 27], 10.00th=[ 34], 20.00th=[ 36], 00:37:14.562 | 30.00th=[ 42], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 57], 00:37:14.562 | 70.00th=[ 60], 80.00th=[ 67], 90.00th=[ 80], 95.00th=[ 86], 00:37:14.562 | 99.00th=[ 111], 99.50th=[ 116], 99.90th=[ 124], 99.95th=[ 124], 00:37:14.562 | 99.99th=[ 124] 00:37:14.562 bw ( KiB/s): min= 768, max= 1616, per=2.39%, avg=1216.10, stdev=238.49, samples=20 00:37:14.562 iops : min= 192, max= 404, avg=304.00, stdev=59.60, samples=20 00:37:14.562 lat (msec) : 50=54.73%, 100=43.33%, 250=1.93% 00:37:14.562 cpu : usr=32.56%, sys=0.65%, ctx=934, majf=0, minf=9 00:37:14.562 IO depths : 1=1.1%, 2=2.5%, 4=9.6%, 8=74.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:37:14.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.562 complete : 0=0.0%, 4=89.9%, 8=5.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.562 issued rwts: total=3053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.562 filename2: (groupid=0, jobs=1): err= 0: pid=129236: Sat Dec 14 18:57:28 2024 00:37:14.562 read: IOPS=1050, BW=4202KiB/s (4302kB/s)(41.0MiB/10001msec) 00:37:14.562 slat (nsec): min=3556, max=99458, avg=13763.36, stdev=9845.83 00:37:14.562 clat (msec): min=6, max=215, avg=15.13, stdev=19.65 00:37:14.562 lat (msec): min=6, max=215, avg=15.14, stdev=19.65 00:37:14.562 clat percentiles (msec): 00:37:14.562 | 1.00th=[ 8], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:37:14.562 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 11], 00:37:14.562 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 20], 95.00th=[ 66], 00:37:14.562 | 99.00th=[ 95], 99.50th=[ 107], 99.90th=[ 180], 99.95th=[ 180], 00:37:14.562 | 99.99th=[ 215] 00:37:14.562 bw ( KiB/s): min= 640, max= 8320, 
per=8.09%, avg=4112.32, stdev=3021.91, samples=19 00:37:14.562 iops : min= 160, max= 2080, avg=1028.05, stdev=755.49, samples=19 00:37:14.562 lat (msec) : 10=54.53%, 20=35.51%, 50=2.07%, 100=7.20%, 250=0.70% 00:37:14.562 cpu : usr=70.96%, sys=1.43%, ctx=760, majf=0, minf=9 00:37:14.562 IO depths : 1=3.7%, 2=7.4%, 4=17.5%, 8=62.5%, 16=8.9%, 32=0.0%, >=64=0.0% 00:37:14.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.562 complete : 0=0.0%, 4=91.9%, 8=2.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.562 issued rwts: total=10505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.562 filename2: (groupid=0, jobs=1): err= 0: pid=129237: Sat Dec 14 18:57:28 2024 00:37:14.562 read: IOPS=328, BW=1315KiB/s (1346kB/s)(12.9MiB/10042msec) 00:37:14.562 slat (usec): min=4, max=8026, avg=23.61, stdev=304.90 00:37:14.562 clat (msec): min=4, max=122, avg=48.51, stdev=16.49 00:37:14.562 lat (msec): min=4, max=122, avg=48.53, stdev=16.49 00:37:14.562 clat percentiles (msec): 00:37:14.562 | 1.00th=[ 8], 5.00th=[ 25], 10.00th=[ 33], 20.00th=[ 36], 00:37:14.562 | 30.00th=[ 39], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 48], 00:37:14.562 | 70.00th=[ 54], 80.00th=[ 61], 90.00th=[ 71], 95.00th=[ 74], 00:37:14.562 | 99.00th=[ 105], 99.50th=[ 111], 99.90th=[ 124], 99.95th=[ 124], 00:37:14.562 | 99.99th=[ 124] 00:37:14.562 bw ( KiB/s): min= 992, max= 2048, per=2.58%, avg=1314.35, stdev=239.58, samples=20 00:37:14.562 iops : min= 248, max= 512, avg=328.50, stdev=59.88, samples=20 00:37:14.562 lat (msec) : 10=1.45%, 20=0.27%, 50=65.01%, 100=31.81%, 250=1.45% 00:37:14.562 cpu : usr=33.98%, sys=0.80%, ctx=960, majf=0, minf=9 00:37:14.562 IO depths : 1=1.2%, 2=2.5%, 4=9.2%, 8=74.7%, 16=12.5%, 32=0.0%, >=64=0.0% 00:37:14.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.562 complete : 0=0.0%, 4=90.0%, 8=5.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.563 issued rwts: total=3301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.563 filename2: (groupid=0, jobs=1): err= 0: pid=129238: Sat Dec 14 18:57:28 2024 00:37:14.563 read: IOPS=315, BW=1263KiB/s (1293kB/s)(12.4MiB/10037msec) 00:37:14.563 slat (usec): min=3, max=8044, avg=23.16, stdev=293.53 00:37:14.563 clat (msec): min=8, max=146, avg=50.54, stdev=18.29 00:37:14.563 lat (msec): min=8, max=146, avg=50.57, stdev=18.30 00:37:14.563 clat percentiles (msec): 00:37:14.563 | 1.00th=[ 21], 5.00th=[ 25], 10.00th=[ 32], 20.00th=[ 35], 00:37:14.563 | 30.00th=[ 39], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 50], 00:37:14.563 | 70.00th=[ 59], 80.00th=[ 67], 90.00th=[ 75], 95.00th=[ 84], 00:37:14.563 | 99.00th=[ 107], 99.50th=[ 122], 99.90th=[ 146], 99.95th=[ 146], 00:37:14.563 | 99.99th=[ 146] 00:37:14.563 bw ( KiB/s): min= 680, max= 1824, per=2.48%, avg=1260.20, stdev=295.28, samples=20 00:37:14.563 iops : min= 170, max= 456, avg=315.05, stdev=73.82, samples=20 00:37:14.563 lat (msec) : 10=0.44%, 20=0.32%, 50=62.01%, 100=35.97%, 250=1.26% 00:37:14.563 cpu : usr=32.70%, sys=0.50%, ctx=932, majf=0, minf=9 00:37:14.563 IO depths : 1=0.2%, 2=0.5%, 4=4.8%, 8=80.5%, 16=14.0%, 32=0.0%, >=64=0.0% 00:37:14.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.563 complete : 0=0.0%, 4=88.8%, 8=7.3%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.563 issued rwts: total=3169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.563 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:37:14.563 filename2: (groupid=0, jobs=1): err= 0: pid=129239: Sat Dec 14 18:57:28 2024 00:37:14.563 read: IOPS=339, BW=1358KiB/s (1391kB/s)(13.3MiB/10003msec) 00:37:14.563 slat (usec): min=6, max=4044, avg=15.76, stdev=119.10 00:37:14.563 clat (msec): min=4, max=107, avg=47.02, stdev=18.90 00:37:14.563 lat (msec): min=4, max=108, avg=47.04, stdev=18.90 00:37:14.563 clat percentiles (msec): 00:37:14.563 | 1.00th=[ 8], 5.00th=[ 23], 10.00th=[ 25], 20.00th=[ 32], 00:37:14.563 | 30.00th=[ 36], 40.00th=[ 41], 50.00th=[ 45], 60.00th=[ 48], 00:37:14.563 | 70.00th=[ 56], 80.00th=[ 62], 90.00th=[ 72], 95.00th=[ 85], 00:37:14.563 | 99.00th=[ 101], 99.50th=[ 104], 99.90th=[ 108], 99.95th=[ 108], 00:37:14.563 | 99.99th=[ 108] 00:37:14.563 bw ( KiB/s): min= 896, max= 2427, per=2.71%, avg=1376.11, stdev=388.23, samples=19 00:37:14.563 iops : min= 224, max= 606, avg=343.89, stdev=96.95, samples=19 00:37:14.563 lat (msec) : 10=1.41%, 20=0.88%, 50=62.72%, 100=33.95%, 250=1.03% 00:37:14.563 cpu : usr=44.19%, sys=0.78%, ctx=1191, majf=0, minf=9 00:37:14.563 IO depths : 1=1.9%, 2=3.8%, 4=11.6%, 8=71.3%, 16=11.4%, 32=0.0%, >=64=0.0% 00:37:14.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.563 complete : 0=0.0%, 4=90.2%, 8=4.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.563 issued rwts: total=3396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.563 filename2: (groupid=0, jobs=1): err= 0: pid=129240: Sat Dec 14 18:57:28 2024 00:37:14.563 read: IOPS=348, BW=1394KiB/s (1428kB/s)(13.6MiB/10021msec) 00:37:14.563 slat (usec): min=6, max=7053, avg=18.25, stdev=176.89 00:37:14.563 clat (msec): min=14, max=119, avg=45.76, stdev=16.63 00:37:14.563 lat (msec): min=14, max=119, avg=45.78, stdev=16.63 00:37:14.563 clat percentiles (msec): 00:37:14.563 | 1.00th=[ 18], 5.00th=[ 24], 10.00th=[ 27], 20.00th=[ 33], 00:37:14.563 | 30.00th=[ 36], 40.00th=[ 40], 50.00th=[ 43], 60.00th=[ 47], 00:37:14.563 | 70.00th=[ 51], 80.00th=[ 59], 90.00th=[ 67], 95.00th=[ 75], 00:37:14.563 | 99.00th=[ 105], 99.50th=[ 109], 99.90th=[ 118], 99.95th=[ 121], 00:37:14.563 | 99.99th=[ 121] 00:37:14.563 bw ( KiB/s): min= 936, max= 1808, per=2.74%, avg=1391.95, stdev=303.77, samples=20 00:37:14.563 iops : min= 234, max= 452, avg=347.95, stdev=75.95, samples=20 00:37:14.563 lat (msec) : 20=1.09%, 50=68.68%, 100=29.23%, 250=1.00% 00:37:14.563 cpu : usr=44.04%, sys=0.75%, ctx=1277, majf=0, minf=9 00:37:14.563 IO depths : 1=1.7%, 2=3.7%, 4=12.6%, 8=70.7%, 16=11.3%, 32=0.0%, >=64=0.0% 00:37:14.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.563 complete : 0=0.0%, 4=90.6%, 8=4.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.563 issued rwts: total=3493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.563 filename2: (groupid=0, jobs=1): err= 0: pid=129241: Sat Dec 14 18:57:28 2024 00:37:14.563 read: IOPS=939, BW=3758KiB/s (3849kB/s)(36.7MiB/10001msec) 00:37:14.563 slat (usec): min=3, max=8023, avg=13.92, stdev=92.85 00:37:14.563 clat (msec): min=8, max=210, avg=16.92, stdev=20.35 00:37:14.563 lat (msec): min=8, max=210, avg=16.93, stdev=20.36 00:37:14.563 clat percentiles (msec): 00:37:14.563 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:37:14.563 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:37:14.563 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 45], 95.00th=[ 71], 00:37:14.563 | 
99.00th=[ 95], 99.50th=[ 109], 99.90th=[ 188], 99.95th=[ 211], 00:37:14.563 | 99.99th=[ 211] 00:37:14.563 bw ( KiB/s): min= 560, max= 6400, per=7.20%, avg=3660.84, stdev=2606.67, samples=19 00:37:14.563 iops : min= 140, max= 1600, avg=915.21, stdev=651.67, samples=19 00:37:14.563 lat (msec) : 10=18.59%, 20=70.34%, 50=2.25%, 100=8.07%, 250=0.76% 00:37:14.563 cpu : usr=67.82%, sys=1.62%, ctx=719, majf=0, minf=9 00:37:14.563 IO depths : 1=5.6%, 2=11.3%, 4=23.2%, 8=52.9%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:14.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.563 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.563 issued rwts: total=9397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.563 filename2: (groupid=0, jobs=1): err= 0: pid=129242: Sat Dec 14 18:57:28 2024 00:37:14.563 read: IOPS=310, BW=1241KiB/s (1270kB/s)(12.2MiB/10031msec) 00:37:14.563 slat (usec): min=6, max=8022, avg=16.71, stdev=176.18 00:37:14.563 clat (msec): min=18, max=141, avg=51.48, stdev=20.27 00:37:14.563 lat (msec): min=18, max=141, avg=51.49, stdev=20.27 00:37:14.563 clat percentiles (msec): 00:37:14.563 | 1.00th=[ 22], 5.00th=[ 25], 10.00th=[ 27], 20.00th=[ 35], 00:37:14.563 | 30.00th=[ 40], 40.00th=[ 45], 50.00th=[ 48], 60.00th=[ 53], 00:37:14.563 | 70.00th=[ 60], 80.00th=[ 68], 90.00th=[ 81], 95.00th=[ 87], 00:37:14.563 | 99.00th=[ 108], 99.50th=[ 132], 99.90th=[ 142], 99.95th=[ 142], 00:37:14.563 | 99.99th=[ 142] 00:37:14.563 bw ( KiB/s): min= 728, max= 1888, per=2.43%, avg=1237.50, stdev=367.97, samples=20 00:37:14.563 iops : min= 182, max= 472, avg=309.35, stdev=91.99, samples=20 00:37:14.563 lat (msec) : 20=0.32%, 50=56.54%, 100=40.47%, 250=2.67% 00:37:14.563 cpu : usr=40.26%, sys=0.63%, ctx=1074, majf=0, minf=9 00:37:14.563 IO depths : 1=1.0%, 2=2.3%, 4=9.2%, 8=75.1%, 16=12.4%, 32=0.0%, >=64=0.0% 00:37:14.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.563 complete : 0=0.0%, 4=89.7%, 8=5.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.563 issued rwts: total=3111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.563 filename2: (groupid=0, jobs=1): err= 0: pid=129243: Sat Dec 14 18:57:28 2024 00:37:14.563 read: IOPS=1065, BW=4261KiB/s (4363kB/s)(41.6MiB/10007msec) 00:37:14.563 slat (usec): min=3, max=8041, avg=16.92, stdev=103.45 00:37:14.563 clat (msec): min=2, max=193, avg=14.89, stdev=19.67 00:37:14.563 lat (msec): min=2, max=193, avg=14.90, stdev=19.67 00:37:14.563 clat percentiles (msec): 00:37:14.563 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:37:14.563 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 11], 00:37:14.563 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 12], 95.00th=[ 67], 00:37:14.563 | 99.00th=[ 97], 99.50th=[ 107], 99.90th=[ 176], 99.95th=[ 176], 00:37:14.563 | 99.99th=[ 194] 00:37:14.563 bw ( KiB/s): min= 590, max= 8320, per=8.37%, avg=4257.50, stdev=3204.51, samples=20 00:37:14.563 iops : min= 147, max= 2080, avg=1064.35, stdev=801.16, samples=20 00:37:14.563 lat (msec) : 4=0.08%, 10=59.11%, 20=31.53%, 50=0.92%, 100=7.45% 00:37:14.563 lat (msec) : 250=0.91% 00:37:14.563 cpu : usr=72.65%, sys=1.73%, ctx=672, majf=0, minf=9 00:37:14.563 IO depths : 1=3.6%, 2=7.4%, 4=15.0%, 8=65.1%, 16=8.9%, 32=0.0%, >=64=0.0% 00:37:14.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.563 complete : 0=0.0%, 4=91.9%, 
8=2.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.563 issued rwts: total=10660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:14.563 00:37:14.563 Run status group 0 (all jobs): 00:37:14.563 READ: bw=49.7MiB/s (52.1MB/s), 1118KiB/s-4261KiB/s (1145kB/s-4363kB/s), io=499MiB (523MB), run=10001-10052msec 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:14.563 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.564 18:57:28 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.564 bdev_null0 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.564 [2024-12-14 18:57:28.409360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 1 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.564 bdev_null1 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:14.564 { 00:37:14.564 "params": { 00:37:14.564 "name": "Nvme$subsystem", 00:37:14.564 "trtype": "$TEST_TRANSPORT", 00:37:14.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:14.564 "adrfam": "ipv4", 00:37:14.564 "trsvcid": "$NVMF_PORT", 00:37:14.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:14.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:14.564 "hdgst": ${hdgst:-false}, 00:37:14.564 "ddgst": ${ddgst:-false} 00:37:14.564 }, 00:37:14.564 "method": "bdev_nvme_attach_controller" 00:37:14.564 } 00:37:14.564 EOF 00:37:14.564 )") 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:14.564 { 00:37:14.564 "params": { 00:37:14.564 "name": "Nvme$subsystem", 00:37:14.564 "trtype": "$TEST_TRANSPORT", 00:37:14.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:14.564 "adrfam": "ipv4", 00:37:14.564 "trsvcid": "$NVMF_PORT", 00:37:14.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:14.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:14.564 "hdgst": ${hdgst:-false}, 00:37:14.564 "ddgst": ${ddgst:-false} 00:37:14.564 }, 00:37:14.564 "method": "bdev_nvme_attach_controller" 00:37:14.564 } 00:37:14.564 EOF 00:37:14.564 )") 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
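
The records above show the harness assembling, through bash here-docs and /dev/fd file descriptors, the SPDK JSON configuration that fio's spdk_bdev ioengine consumes; the `jq .` just recorded validates and pretty-prints it before it is handed to fio as /dev/fd/62, with the generated job file arriving as /dev/fd/61. A minimal standalone sketch of the same wiring, written to real files instead (the outer "subsystems"/"config" wrapper is SPDK's standard JSON-config shape and is filled in here from memory, since the trace only prints the inner entry; /tmp/bdev.json and /tmp/job.fio are hypothetical stand-ins):

# Hypothetical standalone equivalent of the /dev/fd plumbing above.
cat > /tmp/bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Plugin and fio paths as they appear later in this trace.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/job.fio

The interleaved ldd | grep libasan / libclang_rt.asan records implement the sanitizer hookup: if the fio plugin links against an ASan runtime, that library must be prepended to LD_PRELOAD ahead of the plugin. In this run both greps come back empty, so asan_lib stays unset and only the plugin itself is preloaded.
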
00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:37:14.564 "params": { 00:37:14.564 "name": "Nvme0", 00:37:14.564 "trtype": "tcp", 00:37:14.564 "traddr": "10.0.0.3", 00:37:14.564 "adrfam": "ipv4", 00:37:14.564 "trsvcid": "4420", 00:37:14.564 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:14.564 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:14.564 "hdgst": false, 00:37:14.564 "ddgst": false 00:37:14.564 }, 00:37:14.564 "method": "bdev_nvme_attach_controller" 00:37:14.564 },{ 00:37:14.564 "params": { 00:37:14.564 "name": "Nvme1", 00:37:14.564 "trtype": "tcp", 00:37:14.564 "traddr": "10.0.0.3", 00:37:14.564 "adrfam": "ipv4", 00:37:14.564 "trsvcid": "4420", 00:37:14.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:14.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:14.564 "hdgst": false, 00:37:14.564 "ddgst": false 00:37:14.564 }, 00:37:14.564 "method": "bdev_nvme_attach_controller" 00:37:14.564 }' 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:14.564 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:14.565 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:14.565 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:14.565 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:14.565 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:14.565 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:14.565 18:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:14.565 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:14.565 ... 00:37:14.565 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:14.565 ... 
00:37:14.565 fio-3.35 00:37:14.565 Starting 4 threads 00:37:18.813 00:37:18.813 filename0: (groupid=0, jobs=1): err= 0: pid=129416: Sat Dec 14 18:57:34 2024 00:37:18.813 read: IOPS=2258, BW=17.6MiB/s (18.5MB/s)(88.2MiB/5001msec) 00:37:18.813 slat (nsec): min=3394, max=94651, avg=23165.00, stdev=8934.37 00:37:18.813 clat (usec): min=1802, max=9741, avg=3437.72, stdev=385.44 00:37:18.813 lat (usec): min=1822, max=9764, avg=3460.89, stdev=383.73 00:37:18.813 clat percentiles (usec): 00:37:18.813 | 1.00th=[ 3097], 5.00th=[ 3163], 10.00th=[ 3195], 20.00th=[ 3261], 00:37:18.813 | 30.00th=[ 3294], 40.00th=[ 3326], 50.00th=[ 3359], 60.00th=[ 3392], 00:37:18.813 | 70.00th=[ 3425], 80.00th=[ 3523], 90.00th=[ 3654], 95.00th=[ 3949], 00:37:18.813 | 99.00th=[ 5342], 99.50th=[ 5407], 99.90th=[ 6456], 99.95th=[ 7570], 00:37:18.813 | 99.99th=[ 7635] 00:37:18.813 bw ( KiB/s): min=16256, max=18560, per=24.93%, avg=18019.56, stdev=780.64, samples=9 00:37:18.813 iops : min= 2032, max= 2320, avg=2252.44, stdev=97.58, samples=9 00:37:18.813 lat (msec) : 2=0.13%, 4=95.60%, 10=4.27% 00:37:18.813 cpu : usr=95.60%, sys=3.16%, ctx=13, majf=0, minf=10 00:37:18.813 IO depths : 1=12.0%, 2=25.0%, 4=50.0%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:18.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.813 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.813 issued rwts: total=11296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.813 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:18.813 filename0: (groupid=0, jobs=1): err= 0: pid=129417: Sat Dec 14 18:57:34 2024 00:37:18.813 read: IOPS=2259, BW=17.7MiB/s (18.5MB/s)(88.3MiB/5002msec) 00:37:18.813 slat (nsec): min=3424, max=71461, avg=9097.18, stdev=5838.12 00:37:18.813 clat (usec): min=1611, max=9820, avg=3492.29, stdev=370.96 00:37:18.813 lat (usec): min=1618, max=9847, avg=3501.39, stdev=371.24 00:37:18.813 clat percentiles (usec): 00:37:18.813 | 1.00th=[ 3195], 5.00th=[ 3261], 10.00th=[ 3294], 20.00th=[ 3326], 00:37:18.813 | 30.00th=[ 3359], 40.00th=[ 3392], 50.00th=[ 3392], 60.00th=[ 3425], 00:37:18.813 | 70.00th=[ 3490], 80.00th=[ 3556], 90.00th=[ 3687], 95.00th=[ 3949], 00:37:18.813 | 99.00th=[ 5342], 99.50th=[ 5407], 99.90th=[ 6456], 99.95th=[ 7570], 00:37:18.813 | 99.99th=[ 7570] 00:37:18.813 bw ( KiB/s): min=16512, max=18688, per=24.97%, avg=18048.00, stdev=735.30, samples=9 00:37:18.813 iops : min= 2064, max= 2336, avg=2256.00, stdev=91.91, samples=9 00:37:18.813 lat (msec) : 2=0.14%, 4=95.38%, 10=4.48% 00:37:18.813 cpu : usr=94.88%, sys=3.86%, ctx=55, majf=0, minf=9 00:37:18.813 IO depths : 1=11.8%, 2=24.9%, 4=50.1%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:18.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.813 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.813 issued rwts: total=11304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.813 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:18.813 filename1: (groupid=0, jobs=1): err= 0: pid=129418: Sat Dec 14 18:57:34 2024 00:37:18.813 read: IOPS=2259, BW=17.7MiB/s (18.5MB/s)(88.3MiB/5001msec) 00:37:18.813 slat (nsec): min=6066, max=88570, avg=23000.50, stdev=8563.38 00:37:18.813 clat (usec): min=402, max=9744, avg=3427.79, stdev=393.02 00:37:18.813 lat (usec): min=408, max=9751, avg=3450.79, stdev=392.45 00:37:18.813 clat percentiles (usec): 00:37:18.813 | 1.00th=[ 3097], 5.00th=[ 3163], 10.00th=[ 3195], 20.00th=[ 3261], 00:37:18.813 | 30.00th=[ 
3294], 40.00th=[ 3326], 50.00th=[ 3326], 60.00th=[ 3392], 00:37:18.813 | 70.00th=[ 3425], 80.00th=[ 3490], 90.00th=[ 3654], 95.00th=[ 3916], 00:37:18.813 | 99.00th=[ 5276], 99.50th=[ 5342], 99.90th=[ 6521], 99.95th=[ 7570], 00:37:18.813 | 99.99th=[ 7767] 00:37:18.813 bw ( KiB/s): min=16256, max=18560, per=24.93%, avg=18019.56, stdev=780.64, samples=9 00:37:18.813 iops : min= 2032, max= 2320, avg=2252.44, stdev=97.58, samples=9 00:37:18.814 lat (usec) : 500=0.03% 00:37:18.814 lat (msec) : 2=0.13%, 4=95.70%, 10=4.14% 00:37:18.814 cpu : usr=95.54%, sys=3.34%, ctx=8, majf=0, minf=9 00:37:18.814 IO depths : 1=11.5%, 2=25.0%, 4=50.0%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:18.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.814 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.814 issued rwts: total=11299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.814 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:18.814 filename1: (groupid=0, jobs=1): err= 0: pid=129419: Sat Dec 14 18:57:34 2024 00:37:18.814 read: IOPS=2258, BW=17.6MiB/s (18.5MB/s)(88.2MiB/5001msec) 00:37:18.814 slat (nsec): min=6238, max=94257, avg=15202.39, stdev=11042.95 00:37:18.814 clat (usec): min=2011, max=7703, avg=3478.66, stdev=367.33 00:37:18.814 lat (usec): min=2017, max=7713, avg=3493.86, stdev=364.77 00:37:18.814 clat percentiles (usec): 00:37:18.814 | 1.00th=[ 3097], 5.00th=[ 3195], 10.00th=[ 3261], 20.00th=[ 3294], 00:37:18.814 | 30.00th=[ 3326], 40.00th=[ 3359], 50.00th=[ 3392], 60.00th=[ 3425], 00:37:18.814 | 70.00th=[ 3490], 80.00th=[ 3556], 90.00th=[ 3720], 95.00th=[ 3949], 00:37:18.814 | 99.00th=[ 5342], 99.50th=[ 5342], 99.90th=[ 6521], 99.95th=[ 7308], 00:37:18.814 | 99.99th=[ 7308] 00:37:18.814 bw ( KiB/s): min=16384, max=18688, per=24.95%, avg=18033.78, stdev=750.31, samples=9 00:37:18.814 iops : min= 2048, max= 2336, avg=2254.22, stdev=93.79, samples=9 00:37:18.814 lat (msec) : 4=95.44%, 10=4.56% 00:37:18.814 cpu : usr=94.98%, sys=3.68%, ctx=4, majf=0, minf=9 00:37:18.814 IO depths : 1=11.8%, 2=25.0%, 4=50.0%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:18.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.814 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.814 issued rwts: total=11296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.814 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:18.814 00:37:18.814 Run status group 0 (all jobs): 00:37:18.814 READ: bw=70.6MiB/s (74.0MB/s), 17.6MiB/s-17.7MiB/s (18.5MB/s-18.5MB/s), io=353MiB (370MB), run=5001-5002msec 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.814 
18:57:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:18.814 ************************************ 00:37:18.814 END TEST fio_dif_rand_params 00:37:18.814 ************************************ 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.814 00:37:18.814 real 0m29.186s 00:37:18.814 user 2m54.344s 00:37:18.814 sys 0m4.610s 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:18.814 18:57:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:19.073 18:57:34 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:19.073 18:57:34 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:19.073 18:57:34 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:19.073 18:57:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:19.073 ************************************ 00:37:19.073 START TEST fio_dif_digest 00:37:19.073 ************************************ 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:19.073 18:57:34 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:19.073 bdev_null0 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:19.073 [2024-12-14 18:57:34.626456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.073 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:19.074 { 00:37:19.074 "params": { 00:37:19.074 "name": "Nvme$subsystem", 
00:37:19.074 "trtype": "$TEST_TRANSPORT", 00:37:19.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:19.074 "adrfam": "ipv4", 00:37:19.074 "trsvcid": "$NVMF_PORT", 00:37:19.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:19.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:19.074 "hdgst": ${hdgst:-false}, 00:37:19.074 "ddgst": ${ddgst:-false} 00:37:19.074 }, 00:37:19.074 "method": "bdev_nvme_attach_controller" 00:37:19.074 } 00:37:19.074 EOF 00:37:19.074 )") 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
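
This is the same config assembly as in fio_dif_rand_params, with one functional delta visible in the printf that follows: "hdgst" and "ddgst" are now true, so the initiator negotiates CRC32C header and data digests on the NVMe/TCP connection and the run exercises the digest paths end to end. The target this config attaches to was stood up a few records earlier; expressed as direct rpc.py calls rather than the rpc_cmd wrapper (rpc_cmd simply forwards its arguments, so the commands are verbatim from the trace; the default RPC socket is assumed):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Null bdev with 16-byte metadata and DIF type 3, as in the trace above.
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420
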
00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:37:19.074 "params": { 00:37:19.074 "name": "Nvme0", 00:37:19.074 "trtype": "tcp", 00:37:19.074 "traddr": "10.0.0.3", 00:37:19.074 "adrfam": "ipv4", 00:37:19.074 "trsvcid": "4420", 00:37:19.074 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:19.074 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:19.074 "hdgst": true, 00:37:19.074 "ddgst": true 00:37:19.074 }, 00:37:19.074 "method": "bdev_nvme_attach_controller" 00:37:19.074 }' 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:19.074 18:57:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:19.333 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:19.333 ... 
00:37:19.333 fio-3.35 00:37:19.333 Starting 3 threads 00:37:31.543 00:37:31.543 filename0: (groupid=0, jobs=1): err= 0: pid=129520: Sat Dec 14 18:57:45 2024 00:37:31.543 read: IOPS=219, BW=27.4MiB/s (28.8MB/s)(275MiB/10005msec) 00:37:31.543 slat (nsec): min=6595, max=56736, avg=15324.69, stdev=5797.75 00:37:31.543 clat (usec): min=6679, max=54618, avg=13649.05, stdev=3354.13 00:37:31.543 lat (usec): min=6690, max=54644, avg=13664.38, stdev=3353.44 00:37:31.543 clat percentiles (usec): 00:37:31.543 | 1.00th=[ 8455], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 9896], 00:37:31.543 | 30.00th=[13304], 40.00th=[14222], 50.00th=[14615], 60.00th=[14877], 00:37:31.543 | 70.00th=[15139], 80.00th=[15533], 90.00th=[15926], 95.00th=[16450], 00:37:31.543 | 99.00th=[19268], 99.50th=[23725], 99.90th=[51119], 99.95th=[54789], 00:37:31.543 | 99.99th=[54789] 00:37:31.543 bw ( KiB/s): min=22784, max=32256, per=31.21%, avg=28083.20, stdev=2462.62, samples=20 00:37:31.543 iops : min= 178, max= 252, avg=219.40, stdev=19.24, samples=20 00:37:31.543 lat (msec) : 10=20.99%, 20=78.05%, 50=0.82%, 100=0.14% 00:37:31.543 cpu : usr=94.11%, sys=4.45%, ctx=26, majf=0, minf=0 00:37:31.543 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:31.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.543 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:31.543 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:31.543 filename0: (groupid=0, jobs=1): err= 0: pid=129521: Sat Dec 14 18:57:45 2024 00:37:31.543 read: IOPS=258, BW=32.3MiB/s (33.9MB/s)(323MiB/10006msec) 00:37:31.543 slat (usec): min=6, max=278, avg=17.01, stdev= 9.10 00:37:31.543 clat (usec): min=5464, max=43851, avg=11580.71, stdev=2956.98 00:37:31.544 lat (usec): min=5471, max=43877, avg=11597.72, stdev=2957.08 00:37:31.544 clat percentiles (usec): 00:37:31.544 | 1.00th=[ 6849], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 8225], 00:37:31.544 | 30.00th=[10945], 40.00th=[11731], 50.00th=[12256], 60.00th=[12649], 00:37:31.544 | 70.00th=[13042], 80.00th=[13435], 90.00th=[14091], 95.00th=[14615], 00:37:31.544 | 99.00th=[16581], 99.50th=[19530], 99.90th=[42206], 99.95th=[43779], 00:37:31.544 | 99.99th=[43779] 00:37:31.544 bw ( KiB/s): min=26624, max=38144, per=36.76%, avg=33075.20, stdev=2779.13, samples=20 00:37:31.544 iops : min= 208, max= 298, avg=258.40, stdev=21.71, samples=20 00:37:31.544 lat (msec) : 10=26.56%, 20=73.02%, 50=0.43% 00:37:31.544 cpu : usr=93.48%, sys=4.82%, ctx=169, majf=0, minf=9 00:37:31.544 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:31.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.544 issued rwts: total=2587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:31.544 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:31.544 filename0: (groupid=0, jobs=1): err= 0: pid=129522: Sat Dec 14 18:57:45 2024 00:37:31.544 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(285MiB/10046msec) 00:37:31.544 slat (nsec): min=6198, max=58789, avg=14176.50, stdev=5738.69 00:37:31.544 clat (usec): min=8520, max=58300, avg=13194.90, stdev=9948.96 00:37:31.544 lat (usec): min=8541, max=58311, avg=13209.07, stdev=9948.90 00:37:31.544 clat percentiles (usec): 00:37:31.544 | 1.00th=[ 8979], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:37:31.544 
| 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:37:31.544 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11863], 95.00th=[50594], 00:37:31.544 | 99.00th=[51643], 99.50th=[52167], 99.90th=[56886], 99.95th=[57410], 00:37:31.544 | 99.99th=[58459] 00:37:31.544 bw ( KiB/s): min=22016, max=37120, per=32.37%, avg=29120.00, stdev=4171.00, samples=20 00:37:31.544 iops : min= 172, max= 290, avg=227.50, stdev=32.59, samples=20 00:37:31.544 lat (msec) : 10=20.41%, 20=72.91%, 50=0.92%, 100=5.75% 00:37:31.544 cpu : usr=94.53%, sys=4.07%, ctx=95, majf=0, minf=9 00:37:31.544 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:31.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.544 issued rwts: total=2278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:31.544 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:31.544 00:37:31.544 Run status group 0 (all jobs): 00:37:31.544 READ: bw=87.9MiB/s (92.1MB/s), 27.4MiB/s-32.3MiB/s (28.8MB/s-33.9MB/s), io=883MiB (925MB), run=10005-10046msec 00:37:31.544 18:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:31.544 18:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:31.544 18:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:31.544 18:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:31.544 18:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:31.544 18:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:31.544 18:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.544 18:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:31.544 18:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.544 18:57:45 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:31.544 18:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.544 18:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:31.544 18:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.544 ************************************ 00:37:31.544 END TEST fio_dif_digest 00:37:31.544 ************************************ 00:37:31.544 00:37:31.544 real 0m11.023s 00:37:31.544 user 0m28.943s 00:37:31.544 sys 0m1.606s 00:37:31.544 18:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:31.544 18:57:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:31.544 18:57:45 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:31.544 18:57:45 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:31.544 18:57:45 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:31.544 18:57:45 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:31.544 18:57:45 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:31.544 18:57:45 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:31.544 18:57:45 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:31.544 18:57:45 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:31.544 rmmod nvme_tcp 00:37:31.544 rmmod nvme_fabrics 00:37:31.544 rmmod nvme_keyring 00:37:31.544 18:57:45 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:31.544 18:57:45 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:31.544 18:57:45 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:31.544 18:57:45 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 128745 ']' 00:37:31.544 18:57:45 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 128745 00:37:31.544 18:57:45 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 128745 ']' 00:37:31.544 18:57:45 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 128745 00:37:31.544 18:57:45 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:37:31.544 18:57:45 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:31.544 18:57:45 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 128745 00:37:31.544 killing process with pid 128745 00:37:31.544 18:57:45 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:31.544 18:57:45 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:31.544 18:57:45 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 128745' 00:37:31.544 18:57:45 nvmf_dif -- common/autotest_common.sh@969 -- # kill 128745 00:37:31.544 18:57:45 nvmf_dif -- common/autotest_common.sh@974 -- # wait 128745 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:37:31.544 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:31.544 Waiting for block devices as requested 00:37:31.544 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:37:31.544 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@245 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:31.544 18:57:46 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:31.544 18:57:46 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:31.544 18:57:46 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:37:31.544 00:37:31.544 real 1m5.505s 00:37:31.544 user 4m43.540s 00:37:31.544 sys 0m15.382s 00:37:31.544 18:57:46 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:31.544 18:57:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:31.544 ************************************ 00:37:31.544 END TEST nvmf_dif 00:37:31.544 ************************************ 00:37:31.544 18:57:46 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:31.544 18:57:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:31.544 18:57:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:31.544 18:57:46 -- common/autotest_common.sh@10 -- # set +x 00:37:31.544 ************************************ 00:37:31.544 START TEST nvmf_abort_qd_sizes 00:37:31.544 ************************************ 00:37:31.544 18:57:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:31.544 * Looking for test storage... 00:37:31.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:31.544 18:57:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:31.544 18:57:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:37:31.544 18:57:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:31.544 18:57:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:31.544 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:31.544 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:31.544 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:31.544 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:31.544 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:31.544 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:31.544 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:31.544 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:31.544 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:31.544 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:31.544 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:31.544 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:31.544 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:31.544 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:31.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.545 --rc genhtml_branch_coverage=1 00:37:31.545 --rc genhtml_function_coverage=1 00:37:31.545 --rc genhtml_legend=1 00:37:31.545 --rc geninfo_all_blocks=1 00:37:31.545 --rc geninfo_unexecuted_blocks=1 00:37:31.545 00:37:31.545 ' 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:31.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.545 --rc genhtml_branch_coverage=1 00:37:31.545 --rc genhtml_function_coverage=1 00:37:31.545 --rc genhtml_legend=1 00:37:31.545 --rc geninfo_all_blocks=1 00:37:31.545 --rc geninfo_unexecuted_blocks=1 00:37:31.545 00:37:31.545 ' 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:31.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.545 --rc genhtml_branch_coverage=1 00:37:31.545 --rc genhtml_function_coverage=1 00:37:31.545 --rc genhtml_legend=1 00:37:31.545 --rc geninfo_all_blocks=1 00:37:31.545 --rc geninfo_unexecuted_blocks=1 00:37:31.545 00:37:31.545 ' 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:31.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.545 --rc genhtml_branch_coverage=1 00:37:31.545 --rc genhtml_function_coverage=1 00:37:31.545 --rc genhtml_legend=1 00:37:31.545 --rc geninfo_all_blocks=1 00:37:31.545 --rc geninfo_unexecuted_blocks=1 00:37:31.545 00:37:31.545 ' 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:31.545 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@456 -- # nvmf_veth_init 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:37:31.545 Cannot find device "nvmf_init_br" 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:37:31.545 Cannot find device "nvmf_init_br2" 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:37:31.545 Cannot find device "nvmf_tgt_br" 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:37:31.545 Cannot find device "nvmf_tgt_br2" 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:37:31.545 Cannot find device "nvmf_init_br" 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:37:31.545 Cannot find device "nvmf_init_br2" 00:37:31.545 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:37:31.546 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:37:31.546 Cannot find device "nvmf_tgt_br" 00:37:31.546 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:37:31.546 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:37:31.805 Cannot find device "nvmf_tgt_br2" 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:37:31.805 Cannot find device "nvmf_br" 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:37:31.805 Cannot find device "nvmf_init_if" 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:37:31.805 Cannot find device "nvmf_init_if2" 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:31.805 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
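
The "Cannot find device" / "Cannot open network namespace" messages above are expected noise: nvmf_veth_init first tears down any leftovers from a previous run, with each delete guarded so failures are ignored. The records that follow then build the test topology from scratch; condensed to one initiator/target pair (names and addresses exactly as in the trace, error handling omitted):

ip netns add nvmf_tgt_ns_spdk
# veth pairs: one leg per pair stays in the root namespace as a bridge port,
# the target-side interface moves into the namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
# Bridge the root-side legs together so initiator and target can talk.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# Open the NVMe/TCP port, tagged so teardown can find the rule again.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:...'
ping -c 1 10.0.0.3    # initiator -> target sanity check

The second pair (nvmf_init_if2/nvmf_tgt_if2, 10.0.0.2 and 10.0.0.4) is built the same way, and the SPDK_NVMF comment is what allows the teardown seen earlier in this log to strip exactly these rules with iptables-save | grep -v SPDK_NVMF | iptables-restore.
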
00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:31.805 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:31.805 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:37:32.064 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:37:32.064 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:32.064 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:37:32.064 00:37:32.064 --- 10.0.0.3 ping statistics --- 00:37:32.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:32.064 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:37:32.064 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:37:32.064 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:37:32.064 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:37:32.064 00:37:32.064 --- 10.0.0.4 ping statistics --- 00:37:32.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:32.064 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:37:32.064 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:32.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:32.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:37:32.064 00:37:32.064 --- 10.0.0.1 ping statistics --- 00:37:32.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:32.064 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:37:32.064 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:37:32.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:32.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:37:32.064 00:37:32.064 --- 10.0.0.2 ping statistics --- 00:37:32.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:32.064 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:37:32.064 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:32.064 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@457 -- # return 0 00:37:32.064 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:37:32.064 18:57:47 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:37:32.632 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:32.632 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:37:32.891 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:37:32.891 18:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:32.891 18:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:32.891 18:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:32.891 18:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:32.891 18:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:32.891 18:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:32.891 18:57:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:32.891 18:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:32.891 18:57:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:32.891 18:57:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:32.891 18:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=130179 00:37:32.891 18:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 130179 00:37:32.891 18:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:32.891 18:57:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 130179 ']' 00:37:32.891 18:57:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:32.891 18:57:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:32.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:32.891 18:57:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:32.891 18:57:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:32.891 18:57:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:32.891 [2024-12-14 18:57:48.588873] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
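
The trace above (nvmf/common.sh@177-225) builds the test network: two veth pairs, one end of each moved into a fresh network namespace for the target, the host-side peers enslaved to a bridge, iptables holes punched for TCP port 4420, and connectivity proven with pings in both directions before nvmf_tgt is launched inside the namespace. A condensed sketch of the same topology, reconstructed from the trace (only the first veth pair shown; the *_if2/*_br2 pair is built identically):

    # veth pair: initiator end stays on the host, target end moves into the namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # addressing: 10.0.0.1 (initiator) talks to 10.0.0.3 (target) across the bridge
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # the bridge joins the host-side peers of both pairs
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # open the NVMe/TCP port, tagged so teardown can strip exactly these rules
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3    # host -> namespace sanity check
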
00:37:32.891 [2024-12-14 18:57:48.588962] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:33.150 [2024-12-14 18:57:48.732514] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:33.150 [2024-12-14 18:57:48.830301] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:33.150 [2024-12-14 18:57:48.830402] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:33.150 [2024-12-14 18:57:48.830429] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:33.150 [2024-12-14 18:57:48.830449] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:33.150 [2024-12-14 18:57:48.830466] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:33.150 [2024-12-14 18:57:48.830657] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:33.150 [2024-12-14 18:57:48.831195] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:33.150 [2024-12-14 18:57:48.831422] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:37:33.150 [2024-12-14 18:57:48.831460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:33.409 18:57:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:33.409 18:57:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:37:33.409 18:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:33.409 18:57:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:33.409 18:57:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # 
class=01 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:37:33.409 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:37:33.410 18:57:49 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:33.410 18:57:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:33.410 ************************************ 00:37:33.410 START TEST spdk_target_abort 00:37:33.410 ************************************ 00:37:33.410 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:37:33.410 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:33.410 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:37:33.410 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.410 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.410 spdk_targetn1 00:37:33.410 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.410 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:33.410 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.410 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.410 [2024-12-14 18:57:49.158070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:33.669 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.669 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:33.669 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.669 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.669 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.669 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:33.669 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.669 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.669 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.669 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:37:33.669 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.669 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.669 [2024-12-14 18:57:49.186258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:33.669 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.669 18:57:49 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:37:33.669 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:33.669 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:33.669 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:37:33.669 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:33.670 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:33.670 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:33.670 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:33.670 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:33.670 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:33.670 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:33.670 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:33.670 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:33.670 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:33.670 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:37:33.670 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:33.670 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:37:33.670 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:33.670 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:33.670 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:33.670 18:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:36.961 Initializing NVMe Controllers 00:37:36.961 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:37:36.961 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:36.961 Initialization complete. Launching workers. 
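
The rabort helper traced above assembles the transport ID one field at a time and then sweeps three queue depths against the listener. The equivalent standalone invocation, with the flags and transport ID exactly as captured in the trace:

    # queue-depth sweep run by rabort() at target/abort_qd_sizes.sh@32-34
    for qd in 4 24 64; do
        /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done
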
00:37:36.961 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11598, failed: 0 00:37:36.961 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1182, failed to submit 10416 00:37:36.961 success 811, unsuccessful 371, failed 0 00:37:36.961 18:57:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:36.961 18:57:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:40.246 Initializing NVMe Controllers 00:37:40.246 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:37:40.246 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:40.246 Initialization complete. Launching workers. 00:37:40.246 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5984, failed: 0 00:37:40.246 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1255, failed to submit 4729 00:37:40.246 success 240, unsuccessful 1015, failed 0 00:37:40.246 18:57:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:40.246 18:57:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:43.532 Initializing NVMe Controllers 00:37:43.533 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:37:43.533 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:43.533 Initialization complete. Launching workers. 
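
Reading the abort statistics: success + unsuccessful should equal the aborts submitted, and aborts submitted + "failed to submit" should equal the I/Os completed. Both runs so far balance: at qd=4, 811 + 371 = 1182 submitted and 1182 + 10416 = 11598 I/Os; at qd=24, 240 + 1015 = 1255 submitted and 1255 + 4729 = 5984 I/Os.
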
00:37:43.533 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32063, failed: 0 00:37:43.533 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2598, failed to submit 29465 00:37:43.533 success 540, unsuccessful 2058, failed 0 00:37:43.533 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:43.533 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.533 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:43.533 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.533 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:43.533 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.533 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:43.791 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.791 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 130179 00:37:43.791 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 130179 ']' 00:37:43.791 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 130179 00:37:43.791 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:37:43.791 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:43.792 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 130179 00:37:43.792 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:43.792 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:43.792 killing process with pid 130179 00:37:43.792 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 130179' 00:37:43.792 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 130179 00:37:43.792 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 130179 00:37:44.050 00:37:44.050 real 0m10.568s 00:37:44.050 user 0m40.610s 00:37:44.050 sys 0m1.787s 00:37:44.050 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:44.050 18:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:44.050 ************************************ 00:37:44.050 END TEST spdk_target_abort 00:37:44.050 ************************************ 00:37:44.050 18:57:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:44.050 18:57:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:44.050 18:57:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:44.050 18:57:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:44.050 ************************************ 00:37:44.050 START TEST kernel_target_abort 00:37:44.050 
************************************ 00:37:44.050 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:37:44.050 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:44.050 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:37:44.050 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:44.050 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:44.050 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:44.050 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:44.050 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:44.050 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:44.050 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:44.051 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:44.051 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:44.051 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:44.051 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:44.051 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:37:44.051 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:44.051 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:44.051 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:44.051 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:37:44.051 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:44.051 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:37:44.051 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:44.051 18:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:37:44.309 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:44.568 Waiting for block devices as requested 00:37:44.568 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:37:44.568 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:37:44.568 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:37:44.568 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:44.568 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:37:44.568 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:37:44.568 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:44.568 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:44.568 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:37:44.568 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:44.568 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:37:44.827 No valid GPT data, bailing 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:37:44.827 No valid GPT data, bailing 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
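
The probing traced here walks every NVMe block device, skipping zoned namespaces and anything that still carries a partition table; "No valid GPT data, bailing" from spdk-gpt.py plus an empty blkid PTTYPE means the device is free, and the loop ends up claiming /dev/nvme1n1 for the kernel target. A sketch of the selection logic under those assumptions (condensed; the real helpers are is_block_zoned and block_in_use in the scripts traced above):

    # pick a free, non-zoned NVMe block device for the kernel nvmet target
    nvme=
    for block in /sys/block/nvme*; do
        dev=/dev/${block##*/}
        # zoned namespaces are skipped outright
        [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
        # no partition table reported -> not in use, safe to claim
        [[ -z $(blkid -s PTTYPE -o value "$dev") ]] && nvme=$dev
    done
    echo "claiming $nvme"    # /dev/nvme1n1 in this run
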
00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:37:44.827 No valid GPT data, bailing 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:37:44.827 No valid GPT data, bailing 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ 
-b /dev/nvme1n1 ]] 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:44.827 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:44.828 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:44.828 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:44.828 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:37:44.828 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:37:44.828 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:37:44.828 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:37:44.828 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:37:44.828 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:37:44.828 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:37:44.828 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:44.828 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 --hostid=81b74ad8-8517-4157-88ee-39db73f4d0b6 -a 10.0.0.1 -t tcp -s 4420 00:37:44.828 00:37:44.828 Discovery Log Number of Records 2, Generation counter 2 00:37:44.828 =====Discovery Log Entry 0====== 00:37:44.828 trtype: tcp 00:37:44.828 adrfam: ipv4 00:37:44.828 subtype: current discovery subsystem 00:37:44.828 treq: not specified, sq flow control disable supported 00:37:44.828 portid: 1 00:37:44.828 trsvcid: 4420 00:37:44.828 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:44.828 traddr: 10.0.0.1 00:37:44.828 eflags: none 00:37:44.828 sectype: none 00:37:44.828 =====Discovery Log Entry 1====== 00:37:44.828 trtype: tcp 00:37:44.828 adrfam: ipv4 00:37:44.828 subtype: nvme subsystem 00:37:44.828 treq: not specified, sq flow control disable supported 00:37:44.828 portid: 1 00:37:44.828 trsvcid: 4420 00:37:44.828 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:44.828 traddr: 10.0.0.1 00:37:44.828 eflags: none 00:37:44.828 sectype: none 00:37:44.828 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:44.828 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:44.828 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:44.828 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:44.828 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:44.828 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:44.828 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:44.828 18:58:00 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:45.086 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:45.086 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:45.086 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:45.086 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:45.086 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:45.086 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:45.086 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:45.086 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:45.086 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:45.086 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:45.086 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:45.086 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:45.086 18:58:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:48.373 Initializing NVMe Controllers 00:37:48.373 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:48.373 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:48.373 Initialization complete. Launching workers. 00:37:48.373 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31123, failed: 0 00:37:48.373 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31123, failed to submit 0 00:37:48.373 success 0, unsuccessful 31123, failed 0 00:37:48.373 18:58:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:48.373 18:58:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:51.659 Initializing NVMe Controllers 00:37:51.659 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:51.659 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:51.659 Initialization complete. Launching workers. 
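
The mkdir/echo/ln -s sequence at nvmf/common.sh@682-701 above is the standard Linux nvmet configfs setup. xtrace records the echoes but not their redirection targets, so the attribute paths below are reconstructed from the stock nvmet configfs layout; the NQN, backing device, address and port are taken from the trace:

    # kernel NVMe/TCP target via configfs (redirection targets inferred, values from the trace)
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$sub/attr_model"
    echo 1            > "$sub/attr_allow_any_host"
    echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"    # expose the subsystem on the port

The nvme discover call captured in the trace then returns the two discovery log entries shown above (the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420).
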
00:37:51.659 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79757, failed: 0 00:37:51.659 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33929, failed to submit 45828 00:37:51.659 success 0, unsuccessful 33929, failed 0 00:37:51.659 18:58:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:51.659 18:58:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:54.947 Initializing NVMe Controllers 00:37:54.947 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:54.947 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:54.947 Initialization complete. Launching workers. 00:37:54.947 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80963, failed: 0 00:37:54.947 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20202, failed to submit 60761 00:37:54.947 success 0, unsuccessful 20202, failed 0 00:37:54.947 18:58:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:54.947 18:58:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:54.947 18:58:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:37:54.947 18:58:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:54.947 18:58:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:54.947 18:58:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:54.947 18:58:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:54.947 18:58:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:37:54.947 18:58:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:37:54.947 18:58:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:37:55.211 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:56.187 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:37:56.187 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:37:56.187 00:37:56.187 real 0m12.178s 00:37:56.187 user 0m5.940s 00:37:56.187 sys 0m3.627s 00:37:56.187 18:58:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:56.187 18:58:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:56.187 ************************************ 00:37:56.187 END TEST kernel_target_abort 00:37:56.187 ************************************ 00:37:56.187 18:58:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:56.187 18:58:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:56.187 
18:58:11 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:56.187 18:58:11 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:56.446 18:58:11 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:56.446 18:58:11 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:56.446 18:58:11 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:56.446 18:58:11 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:56.446 rmmod nvme_tcp 00:37:56.446 rmmod nvme_fabrics 00:37:56.446 rmmod nvme_keyring 00:37:56.446 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:56.446 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:56.446 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:56.446 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 130179 ']' 00:37:56.446 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 130179 00:37:56.446 18:58:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 130179 ']' 00:37:56.446 18:58:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 130179 00:37:56.446 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (130179) - No such process 00:37:56.446 18:58:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 130179 is not found' 00:37:56.446 Process with pid 130179 is not found 00:37:56.446 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:37:56.446 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:37:56.705 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:56.705 Waiting for block devices as requested 00:37:56.964 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:37:56.964 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:37:56.964 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:56.964 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:56.964 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:56.964 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:56.964 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:37:56.964 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:37:56.964 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:56.964 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:37:56.964 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:37:56.964 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:37:56.964 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:37:56.964 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:37:57.223 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:37:57.223 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:37:57.223 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:37:57.223 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:37:57.223 18:58:12 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:37:57.223 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:37:57.223 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:37:57.223 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:57.223 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:57.223 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:37:57.223 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:57.223 18:58:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:57.223 18:58:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:57.223 18:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:37:57.223 00:37:57.223 real 0m25.953s 00:37:57.223 user 0m47.827s 00:37:57.223 sys 0m6.889s 00:37:57.223 18:58:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:57.223 ************************************ 00:37:57.223 END TEST nvmf_abort_qd_sizes 00:37:57.223 ************************************ 00:37:57.223 18:58:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:57.482 18:58:12 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:37:57.482 18:58:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:57.482 18:58:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:57.482 18:58:12 -- common/autotest_common.sh@10 -- # set +x 00:37:57.482 ************************************ 00:37:57.482 START TEST keyring_file 00:37:57.482 ************************************ 00:37:57.482 18:58:12 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:37:57.482 * Looking for test storage... 
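
Teardown inverts the setup traced earlier: nvme_tcp, nvme_fabrics and nvme_keyring are unloaded, the iptr helper strips exactly the firewall rules that setup tagged, and the veth/bridge topology is deleted. The rule-removal idiom, as captured at nvmf/common.sh@787, with the link deletions condensed from the trace (remove_spdk_ns finally drops the namespace):

    # remove only the SPDK-tagged rules: save, filter the tag, restore
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # then delete the topology
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if && ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
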
00:37:57.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:37:57.482 18:58:13 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:57.482 18:58:13 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:37:57.482 18:58:13 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:57.482 18:58:13 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:57.482 18:58:13 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:57.482 18:58:13 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:57.483 18:58:13 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:57.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.483 --rc genhtml_branch_coverage=1 00:37:57.483 --rc genhtml_function_coverage=1 00:37:57.483 --rc genhtml_legend=1 00:37:57.483 --rc geninfo_all_blocks=1 00:37:57.483 --rc geninfo_unexecuted_blocks=1 00:37:57.483 00:37:57.483 ' 00:37:57.483 18:58:13 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:57.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.483 --rc genhtml_branch_coverage=1 00:37:57.483 --rc genhtml_function_coverage=1 00:37:57.483 --rc genhtml_legend=1 00:37:57.483 --rc geninfo_all_blocks=1 00:37:57.483 --rc 
geninfo_unexecuted_blocks=1 00:37:57.483 00:37:57.483 ' 00:37:57.483 18:58:13 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:57.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.483 --rc genhtml_branch_coverage=1 00:37:57.483 --rc genhtml_function_coverage=1 00:37:57.483 --rc genhtml_legend=1 00:37:57.483 --rc geninfo_all_blocks=1 00:37:57.483 --rc geninfo_unexecuted_blocks=1 00:37:57.483 00:37:57.483 ' 00:37:57.483 18:58:13 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:57.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.483 --rc genhtml_branch_coverage=1 00:37:57.483 --rc genhtml_function_coverage=1 00:37:57.483 --rc genhtml_legend=1 00:37:57.483 --rc geninfo_all_blocks=1 00:37:57.483 --rc geninfo_unexecuted_blocks=1 00:37:57.483 00:37:57.483 ' 00:37:57.483 18:58:13 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:37:57.483 18:58:13 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:57.483 18:58:13 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:57.483 18:58:13 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:57.483 18:58:13 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:57.483 18:58:13 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:57.483 18:58:13 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.483 18:58:13 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.483 18:58:13 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.483 18:58:13 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:57.483 18:58:13 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@51 -- # : 0 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:57.483 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:57.483 18:58:13 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:57.483 18:58:13 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:57.483 18:58:13 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:57.483 18:58:13 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:57.483 18:58:13 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:57.483 18:58:13 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:57.483 18:58:13 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:57.483 18:58:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:57.483 18:58:13 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:57.483 18:58:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:57.483 18:58:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:57.483 18:58:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:57.483 18:58:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4g5GZ7TNkP 00:37:57.483 18:58:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:37:57.483 18:58:13 keyring_file -- nvmf/common.sh@729 -- # python - 00:37:57.741 18:58:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4g5GZ7TNkP 00:37:57.741 18:58:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4g5GZ7TNkP 00:37:57.741 18:58:13 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.4g5GZ7TNkP 00:37:57.741 18:58:13 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:57.741 18:58:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:57.741 18:58:13 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:57.741 18:58:13 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:57.741 18:58:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:57.741 18:58:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:57.741 18:58:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.bu61htUkWj 00:37:57.741 18:58:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:57.741 18:58:13 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:57.741 18:58:13 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:37:57.741 18:58:13 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:37:57.741 18:58:13 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:37:57.741 18:58:13 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:37:57.741 18:58:13 keyring_file -- nvmf/common.sh@729 -- # python - 00:37:57.741 18:58:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.bu61htUkWj 00:37:57.741 18:58:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.bu61htUkWj 00:37:57.741 18:58:13 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.bu61htUkWj 00:37:57.741 18:58:13 keyring_file -- keyring/file.sh@30 -- # tgtpid=131080 00:37:57.741 18:58:13 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:57.741 18:58:13 keyring_file -- keyring/file.sh@32 -- # waitforlisten 131080 00:37:57.741 18:58:13 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 131080 ']' 00:37:57.741 18:58:13 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:57.741 18:58:13 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:57.741 18:58:13 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:37:57.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:57.741 18:58:13 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:57.741 18:58:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:57.741 [2024-12-14 18:58:13.418374] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:57.741 [2024-12-14 18:58:13.418488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131080 ] 00:37:58.000 [2024-12-14 18:58:13.560113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:58.000 [2024-12-14 18:58:13.641365] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:58.258 18:58:13 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:58.258 18:58:13 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:58.258 18:58:13 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:58.258 18:58:13 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.258 18:58:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:58.258 [2024-12-14 18:58:13.948094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:58.258 null0 00:37:58.258 [2024-12-14 18:58:13.980036] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:58.258 [2024-12-14 18:58:13.980222] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:58.258 18:58:14 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.258 18:58:14 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:58.258 18:58:14 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:58.258 18:58:14 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:58.258 18:58:14 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:58.258 18:58:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:58.258 18:58:14 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:58.258 18:58:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:58.258 18:58:14 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:58.258 18:58:14 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.258 18:58:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:58.517 [2024-12-14 18:58:14.012084] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:58.517 2024/12/14 18:58:14 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:37:58.517 request: 00:37:58.517 { 00:37:58.517 "method": "nvmf_subsystem_add_listener", 00:37:58.517 "params": { 00:37:58.517 "nqn": 
"nqn.2016-06.io.spdk:cnode0", 00:37:58.517 "secure_channel": false, 00:37:58.517 "listen_address": { 00:37:58.517 "trtype": "tcp", 00:37:58.517 "traddr": "127.0.0.1", 00:37:58.517 "trsvcid": "4420" 00:37:58.517 } 00:37:58.517 } 00:37:58.517 } 00:37:58.517 Got JSON-RPC error response 00:37:58.517 GoRPCClient: error on JSON-RPC call 00:37:58.517 18:58:14 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:58.517 18:58:14 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:58.517 18:58:14 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:58.517 18:58:14 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:58.517 18:58:14 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:58.517 18:58:14 keyring_file -- keyring/file.sh@47 -- # bperfpid=131103 00:37:58.517 18:58:14 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:58.517 18:58:14 keyring_file -- keyring/file.sh@49 -- # waitforlisten 131103 /var/tmp/bperf.sock 00:37:58.517 18:58:14 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 131103 ']' 00:37:58.517 18:58:14 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:58.517 18:58:14 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:58.517 18:58:14 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:58.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:58.517 18:58:14 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:58.517 18:58:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:58.517 [2024-12-14 18:58:14.083319] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:37:58.517 [2024-12-14 18:58:14.083430] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131103 ] 00:37:58.517 [2024-12-14 18:58:14.224525] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:58.775 [2024-12-14 18:58:14.305028] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:58.775 18:58:14 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:58.775 18:58:14 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:58.776 18:58:14 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4g5GZ7TNkP 00:37:58.776 18:58:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4g5GZ7TNkP 00:37:59.034 18:58:14 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.bu61htUkWj 00:37:59.034 18:58:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.bu61htUkWj 00:37:59.292 18:58:14 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:59.292 18:58:14 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:59.292 18:58:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:59.292 18:58:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:59.292 18:58:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:59.551 18:58:15 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.4g5GZ7TNkP == \/\t\m\p\/\t\m\p\.\4\g\5\G\Z\7\T\N\k\P ]] 00:37:59.551 18:58:15 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:59.551 18:58:15 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:59.551 18:58:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:59.551 18:58:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:59.551 18:58:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:59.810 18:58:15 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.bu61htUkWj == \/\t\m\p\/\t\m\p\.\b\u\6\1\h\t\U\k\W\j ]] 00:37:59.810 18:58:15 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:59.810 18:58:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:59.810 18:58:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:59.810 18:58:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:59.810 18:58:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:59.810 18:58:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:00.068 18:58:15 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:00.068 18:58:15 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:00.068 18:58:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:00.068 18:58:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:00.068 18:58:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:00.068 18:58:15 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:00.068 18:58:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:00.327 18:58:15 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:00.327 18:58:15 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:00.327 18:58:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:00.585 [2024-12-14 18:58:16.173425] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:00.585 nvme0n1 00:38:00.585 18:58:16 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:00.585 18:58:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:00.585 18:58:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:00.585 18:58:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:00.585 18:58:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:00.585 18:58:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:00.844 18:58:16 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:00.844 18:58:16 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:00.844 18:58:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:00.844 18:58:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:00.844 18:58:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:00.844 18:58:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:00.844 18:58:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:01.413 18:58:16 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:01.413 18:58:16 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:01.413 Running I/O for 1 seconds... 
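The results that follow report raw IOPS alongside throughput, and the two figures are consistent with the 4 KiB I/O size (-o 4k) that bdevperf was started with:

    MiB/s = IOPS * io_size / 2^20 = 13644.64 * 4096 / 1048576 ≈ 53.30

which matches the "mibps" field in the JSON summary below.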
00:38:02.349 13599.00 IOPS, 53.12 MiB/s
00:38:02.349 Latency(us)
00:38:02.349 [2024-12-14T18:58:18.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:02.349 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:38:02.349 nvme0n1 : 1.01 13644.64 53.30 0.00 0.00 9355.15 3932.16 14715.81
00:38:02.349 [2024-12-14T18:58:18.102Z] ===================================================================================================================
00:38:02.349 [2024-12-14T18:58:18.102Z] Total : 13644.64 53.30 0.00 0.00 9355.15 3932.16 14715.81
00:38:02.349 {
00:38:02.349   "results": [
00:38:02.349     {
00:38:02.349       "job": "nvme0n1",
00:38:02.349       "core_mask": "0x2",
00:38:02.349       "workload": "randrw",
00:38:02.349       "percentage": 50,
00:38:02.349       "status": "finished",
00:38:02.349       "queue_depth": 128,
00:38:02.349       "io_size": 4096,
00:38:02.349       "runtime": 1.006109,
00:38:02.349       "iops": 13644.644864522632,
00:38:02.349       "mibps": 53.29939400204153,
00:38:02.349       "io_failed": 0,
00:38:02.349       "io_timeout": 0,
00:38:02.349       "avg_latency_us": 9355.14703962704,
00:38:02.349       "min_latency_us": 3932.16,
00:38:02.349       "max_latency_us": 14715.81090909091
00:38:02.349     }
00:38:02.349   ],
00:38:02.349   "core_count": 1
00:38:02.349 }
00:38:02.349 18:58:18 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:38:02.349 18:58:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:38:02.607 18:58:18 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:38:02.607 18:58:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:38:02.607 18:58:18 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:38:02.607 18:58:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:38:02.607 18:58:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:38:02.607 18:58:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:02.865 18:58:18 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:38:02.865 18:58:18 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
00:38:02.865 18:58:18 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:38:02.865 18:58:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:38:02.865 18:58:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:38:02.865 18:58:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:02.865 18:58:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:38:03.433 18:58:18 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 ))
00:38:03.433 18:58:18 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:38:03.433 18:58:18 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:38:03.433 18:58:18 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:38:03.433 18:58:18 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:38:03.433 18:58:18 keyring_file -- common/autotest_common.sh@642 -- #
case "$(type -t "$arg")" in 00:38:03.433 18:58:18 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:03.433 18:58:18 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:03.433 18:58:18 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:03.433 18:58:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:03.433 [2024-12-14 18:58:19.134469] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:03.433 [2024-12-14 18:58:19.134739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10337b0 (107): Transport endpoint is not connected 00:38:03.433 [2024-12-14 18:58:19.135731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10337b0 (9): Bad file descriptor 00:38:03.433 [2024-12-14 18:58:19.136728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:03.433 [2024-12-14 18:58:19.136751] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:03.433 [2024-12-14 18:58:19.136760] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:03.433 [2024-12-14 18:58:19.136770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:38:03.433 2024/12/14 18:58:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:38:03.433 request: 00:38:03.433 { 00:38:03.433 "method": "bdev_nvme_attach_controller", 00:38:03.433 "params": { 00:38:03.433 "name": "nvme0", 00:38:03.433 "trtype": "tcp", 00:38:03.433 "traddr": "127.0.0.1", 00:38:03.433 "adrfam": "ipv4", 00:38:03.433 "trsvcid": "4420", 00:38:03.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:03.433 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:03.433 "prchk_reftag": false, 00:38:03.433 "prchk_guard": false, 00:38:03.433 "hdgst": false, 00:38:03.433 "ddgst": false, 00:38:03.433 "psk": "key1", 00:38:03.433 "allow_unrecognized_csi": false 00:38:03.433 } 00:38:03.433 } 00:38:03.433 Got JSON-RPC error response 00:38:03.433 GoRPCClient: error on JSON-RPC call 00:38:03.433 18:58:19 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:03.433 18:58:19 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:03.433 18:58:19 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:03.433 18:58:19 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:03.433 18:58:19 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:03.433 18:58:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:03.433 18:58:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:03.433 18:58:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:03.433 18:58:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:03.433 18:58:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:03.691 18:58:19 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:03.692 18:58:19 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:03.692 18:58:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:03.692 18:58:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:03.692 18:58:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:03.692 18:58:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:03.692 18:58:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:04.259 18:58:19 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:04.259 18:58:19 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:04.259 18:58:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:04.259 18:58:19 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:04.259 18:58:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:04.517 18:58:20 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:04.517 18:58:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:38:04.517 18:58:20 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:04.776 18:58:20 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:38:04.776 18:58:20 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.4g5GZ7TNkP 00:38:04.776 18:58:20 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.4g5GZ7TNkP 00:38:04.776 18:58:20 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:04.776 18:58:20 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.4g5GZ7TNkP 00:38:04.776 18:58:20 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:04.776 18:58:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:04.776 18:58:20 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:04.776 18:58:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:04.776 18:58:20 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4g5GZ7TNkP 00:38:04.776 18:58:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4g5GZ7TNkP 00:38:05.034 [2024-12-14 18:58:20.657191] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.4g5GZ7TNkP': 0100660 00:38:05.034 [2024-12-14 18:58:20.657228] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:05.034 2024/12/14 18:58:20 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.4g5GZ7TNkP], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:38:05.034 request: 00:38:05.034 { 00:38:05.034 "method": "keyring_file_add_key", 00:38:05.034 "params": { 00:38:05.034 "name": "key0", 00:38:05.034 "path": "/tmp/tmp.4g5GZ7TNkP" 00:38:05.034 } 00:38:05.034 } 00:38:05.034 Got JSON-RPC error response 00:38:05.034 GoRPCClient: error on JSON-RPC call 00:38:05.034 18:58:20 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:05.034 18:58:20 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:05.034 18:58:20 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:05.034 18:58:20 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:05.034 18:58:20 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.4g5GZ7TNkP 00:38:05.034 18:58:20 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4g5GZ7TNkP 00:38:05.034 18:58:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4g5GZ7TNkP 00:38:05.293 18:58:20 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.4g5GZ7TNkP 00:38:05.293 18:58:20 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:05.293 18:58:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:05.293 18:58:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:05.293 18:58:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:05.293 18:58:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:05.293 18:58:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:05.860 18:58:21 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:05.860 18:58:21 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:05.860 18:58:21 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:05.860 18:58:21 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:05.860 18:58:21 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:05.860 18:58:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:05.860 18:58:21 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:05.860 18:58:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:05.860 18:58:21 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:05.860 18:58:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:05.860 [2024-12-14 18:58:21.593348] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.4g5GZ7TNkP': No such file or directory 00:38:05.860 [2024-12-14 18:58:21.593383] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:05.860 [2024-12-14 18:58:21.593402] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:05.860 [2024-12-14 18:58:21.593410] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:05.860 [2024-12-14 18:58:21.593419] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:05.860 [2024-12-14 18:58:21.593427] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:05.860 2024/12/14 18:58:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:38:05.860 request: 00:38:05.860 { 00:38:05.860 "method": "bdev_nvme_attach_controller", 00:38:05.860 "params": { 00:38:05.860 "name": "nvme0", 00:38:05.860 "trtype": "tcp", 00:38:05.860 "traddr": "127.0.0.1", 00:38:05.860 "adrfam": "ipv4", 00:38:05.860 "trsvcid": "4420", 00:38:05.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:05.860 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:05.860 "prchk_reftag": false, 00:38:05.860 "prchk_guard": false, 00:38:05.860 "hdgst": false, 00:38:05.860 "ddgst": false, 00:38:05.860 "psk": "key0", 00:38:05.860 "allow_unrecognized_csi": false 00:38:05.860 } 00:38:05.860 } 00:38:05.860 Got JSON-RPC error response 00:38:05.860 
GoRPCClient: error on JSON-RPC call 00:38:06.119 18:58:21 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:06.119 18:58:21 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:06.119 18:58:21 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:06.119 18:58:21 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:06.119 18:58:21 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:06.119 18:58:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:06.119 18:58:21 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:06.119 18:58:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:06.119 18:58:21 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:06.119 18:58:21 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:06.119 18:58:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:06.119 18:58:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:06.119 18:58:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.27iilZ3ozN 00:38:06.119 18:58:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:06.119 18:58:21 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:06.119 18:58:21 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:38:06.119 18:58:21 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:38:06.119 18:58:21 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:38:06.119 18:58:21 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:38:06.119 18:58:21 keyring_file -- nvmf/common.sh@729 -- # python - 00:38:06.378 18:58:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.27iilZ3ozN 00:38:06.378 18:58:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.27iilZ3ozN 00:38:06.378 18:58:21 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.27iilZ3ozN 00:38:06.378 18:58:21 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.27iilZ3ozN 00:38:06.378 18:58:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.27iilZ3ozN 00:38:06.378 18:58:22 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:06.378 18:58:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:06.636 nvme0n1 00:38:06.636 18:58:22 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:06.636 18:58:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:06.636 18:58:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:06.636 18:58:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:06.636 18:58:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:06.636 18:58:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
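prep_key has now run for the third time (a fresh key0 at /tmp/tmp.27iilZ3ozN, replacing the file that was deliberately deleted above), and as before the trace shows only "python -" without the heredoc it executes. A sketch of what that step plausibly computes, assuming the published NVMe/TCP PSK interchange encoding rather than the verbatim nvmf/common.sh source: the hex key gets a CRC32 suffix and is base64-encoded between the NVMeTLSkey-1 prefix and a two-digit digest tag (00 here, since digest=0).

format_key() {
    local prefix=$1 key=$2 digest=$3
    # assumed layout: "<prefix>:<digest>:<base64(key || CRC32(key))>:"
    python - "$prefix" "$key" "$digest" <<'PYEOF'
import base64, struct, sys, zlib
prefix, hexkey, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
key = bytes.fromhex(hexkey)
crc = struct.pack("<I", zlib.crc32(key))  # 4-byte little-endian CRC32 suffix
print(f"{prefix}:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PYEOF
}

format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 > "$path"
chmod 0600 "$path"  # anything wider is rejected, as the 0660 probe above showed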
00:38:07.203 18:58:22 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:07.203 18:58:22 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:07.203 18:58:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:07.203 18:58:22 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:07.203 18:58:22 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:07.203 18:58:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:07.203 18:58:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:07.203 18:58:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:07.771 18:58:23 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:07.771 18:58:23 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:07.771 18:58:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:07.771 18:58:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:07.771 18:58:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:07.771 18:58:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:07.771 18:58:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:07.771 18:58:23 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:07.771 18:58:23 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:07.771 18:58:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:08.030 18:58:23 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:08.030 18:58:23 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:08.030 18:58:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:08.597 18:58:24 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:08.597 18:58:24 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.27iilZ3ozN 00:38:08.597 18:58:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.27iilZ3ozN 00:38:08.597 18:58:24 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.bu61htUkWj 00:38:08.597 18:58:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.bu61htUkWj 00:38:08.856 18:58:24 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:08.856 18:58:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:09.423 nvme0n1 00:38:09.423 18:58:24 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:09.423 18:58:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
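The JSON blob captured next is the crux of the test's second half: the keyring configuration has to survive a save_config round trip. A sketch of the replay mechanics, paraphrased from the file.sh@113-@116 lines in this trace (process substitution is what surfaces as -c /dev/fd/63 on the second bdevperf command line below):

# capture the first bdevperf instance's live configuration ...
config=$(bperf_cmd save_config)
# ... then boot a second instance straight from it
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw \
    -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config")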
00:38:09.682 18:58:25 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:09.682 "subsystems": [ 00:38:09.682 { 00:38:09.682 "subsystem": "keyring", 00:38:09.682 "config": [ 00:38:09.682 { 00:38:09.682 "method": "keyring_file_add_key", 00:38:09.682 "params": { 00:38:09.682 "name": "key0", 00:38:09.682 "path": "/tmp/tmp.27iilZ3ozN" 00:38:09.682 } 00:38:09.682 }, 00:38:09.682 { 00:38:09.682 "method": "keyring_file_add_key", 00:38:09.682 "params": { 00:38:09.682 "name": "key1", 00:38:09.682 "path": "/tmp/tmp.bu61htUkWj" 00:38:09.682 } 00:38:09.682 } 00:38:09.682 ] 00:38:09.682 }, 00:38:09.682 { 00:38:09.682 "subsystem": "iobuf", 00:38:09.682 "config": [ 00:38:09.682 { 00:38:09.682 "method": "iobuf_set_options", 00:38:09.682 "params": { 00:38:09.682 "large_bufsize": 135168, 00:38:09.682 "large_pool_count": 1024, 00:38:09.682 "small_bufsize": 8192, 00:38:09.682 "small_pool_count": 8192 00:38:09.682 } 00:38:09.682 } 00:38:09.682 ] 00:38:09.682 }, 00:38:09.682 { 00:38:09.682 "subsystem": "sock", 00:38:09.682 "config": [ 00:38:09.682 { 00:38:09.682 "method": "sock_set_default_impl", 00:38:09.682 "params": { 00:38:09.682 "impl_name": "posix" 00:38:09.682 } 00:38:09.682 }, 00:38:09.682 { 00:38:09.682 "method": "sock_impl_set_options", 00:38:09.682 "params": { 00:38:09.682 "enable_ktls": false, 00:38:09.682 "enable_placement_id": 0, 00:38:09.682 "enable_quickack": false, 00:38:09.682 "enable_recv_pipe": true, 00:38:09.682 "enable_zerocopy_send_client": false, 00:38:09.682 "enable_zerocopy_send_server": true, 00:38:09.682 "impl_name": "ssl", 00:38:09.682 "recv_buf_size": 4096, 00:38:09.682 "send_buf_size": 4096, 00:38:09.682 "tls_version": 0, 00:38:09.682 "zerocopy_threshold": 0 00:38:09.682 } 00:38:09.682 }, 00:38:09.682 { 00:38:09.682 "method": "sock_impl_set_options", 00:38:09.682 "params": { 00:38:09.682 "enable_ktls": false, 00:38:09.682 "enable_placement_id": 0, 00:38:09.682 "enable_quickack": false, 00:38:09.682 "enable_recv_pipe": true, 00:38:09.682 "enable_zerocopy_send_client": false, 00:38:09.682 "enable_zerocopy_send_server": true, 00:38:09.682 "impl_name": "posix", 00:38:09.682 "recv_buf_size": 2097152, 00:38:09.682 "send_buf_size": 2097152, 00:38:09.682 "tls_version": 0, 00:38:09.682 "zerocopy_threshold": 0 00:38:09.682 } 00:38:09.682 } 00:38:09.682 ] 00:38:09.682 }, 00:38:09.682 { 00:38:09.682 "subsystem": "vmd", 00:38:09.682 "config": [] 00:38:09.682 }, 00:38:09.682 { 00:38:09.682 "subsystem": "accel", 00:38:09.682 "config": [ 00:38:09.682 { 00:38:09.682 "method": "accel_set_options", 00:38:09.682 "params": { 00:38:09.682 "buf_count": 2048, 00:38:09.682 "large_cache_size": 16, 00:38:09.682 "sequence_count": 2048, 00:38:09.682 "small_cache_size": 128, 00:38:09.682 "task_count": 2048 00:38:09.682 } 00:38:09.682 } 00:38:09.682 ] 00:38:09.682 }, 00:38:09.682 { 00:38:09.682 "subsystem": "bdev", 00:38:09.682 "config": [ 00:38:09.682 { 00:38:09.682 "method": "bdev_set_options", 00:38:09.682 "params": { 00:38:09.682 "bdev_auto_examine": true, 00:38:09.682 "bdev_io_cache_size": 256, 00:38:09.682 "bdev_io_pool_size": 65535, 00:38:09.682 "iobuf_large_cache_size": 16, 00:38:09.682 "iobuf_small_cache_size": 128 00:38:09.682 } 00:38:09.682 }, 00:38:09.682 { 00:38:09.682 "method": "bdev_raid_set_options", 00:38:09.682 "params": { 00:38:09.682 "process_max_bandwidth_mb_sec": 0, 00:38:09.682 "process_window_size_kb": 1024 00:38:09.682 } 00:38:09.682 }, 00:38:09.682 { 00:38:09.682 "method": "bdev_iscsi_set_options", 00:38:09.682 "params": { 00:38:09.682 "timeout_sec": 30 00:38:09.682 } 00:38:09.682 
}, 00:38:09.682 { 00:38:09.682 "method": "bdev_nvme_set_options", 00:38:09.682 "params": { 00:38:09.682 "action_on_timeout": "none", 00:38:09.682 "allow_accel_sequence": false, 00:38:09.682 "arbitration_burst": 0, 00:38:09.682 "bdev_retry_count": 3, 00:38:09.682 "ctrlr_loss_timeout_sec": 0, 00:38:09.682 "delay_cmd_submit": true, 00:38:09.682 "dhchap_dhgroups": [ 00:38:09.682 "null", 00:38:09.682 "ffdhe2048", 00:38:09.682 "ffdhe3072", 00:38:09.682 "ffdhe4096", 00:38:09.682 "ffdhe6144", 00:38:09.682 "ffdhe8192" 00:38:09.682 ], 00:38:09.682 "dhchap_digests": [ 00:38:09.682 "sha256", 00:38:09.682 "sha384", 00:38:09.682 "sha512" 00:38:09.682 ], 00:38:09.682 "disable_auto_failback": false, 00:38:09.682 "fast_io_fail_timeout_sec": 0, 00:38:09.682 "generate_uuids": false, 00:38:09.682 "high_priority_weight": 0, 00:38:09.682 "io_path_stat": false, 00:38:09.682 "io_queue_requests": 512, 00:38:09.683 "keep_alive_timeout_ms": 10000, 00:38:09.683 "low_priority_weight": 0, 00:38:09.683 "medium_priority_weight": 0, 00:38:09.683 "nvme_adminq_poll_period_us": 10000, 00:38:09.683 "nvme_error_stat": false, 00:38:09.683 "nvme_ioq_poll_period_us": 0, 00:38:09.683 "rdma_cm_event_timeout_ms": 0, 00:38:09.683 "rdma_max_cq_size": 0, 00:38:09.683 "rdma_srq_size": 0, 00:38:09.683 "reconnect_delay_sec": 0, 00:38:09.683 "timeout_admin_us": 0, 00:38:09.683 "timeout_us": 0, 00:38:09.683 "transport_ack_timeout": 0, 00:38:09.683 "transport_retry_count": 4, 00:38:09.683 "transport_tos": 0 00:38:09.683 } 00:38:09.683 }, 00:38:09.683 { 00:38:09.683 "method": "bdev_nvme_attach_controller", 00:38:09.683 "params": { 00:38:09.683 "adrfam": "IPv4", 00:38:09.683 "ctrlr_loss_timeout_sec": 0, 00:38:09.683 "ddgst": false, 00:38:09.683 "fast_io_fail_timeout_sec": 0, 00:38:09.683 "hdgst": false, 00:38:09.683 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:09.683 "name": "nvme0", 00:38:09.683 "prchk_guard": false, 00:38:09.683 "prchk_reftag": false, 00:38:09.683 "psk": "key0", 00:38:09.683 "reconnect_delay_sec": 0, 00:38:09.683 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:09.683 "traddr": "127.0.0.1", 00:38:09.683 "trsvcid": "4420", 00:38:09.683 "trtype": "TCP" 00:38:09.683 } 00:38:09.683 }, 00:38:09.683 { 00:38:09.683 "method": "bdev_nvme_set_hotplug", 00:38:09.683 "params": { 00:38:09.683 "enable": false, 00:38:09.683 "period_us": 100000 00:38:09.683 } 00:38:09.683 }, 00:38:09.683 { 00:38:09.683 "method": "bdev_wait_for_examine" 00:38:09.683 } 00:38:09.683 ] 00:38:09.683 }, 00:38:09.683 { 00:38:09.683 "subsystem": "nbd", 00:38:09.683 "config": [] 00:38:09.683 } 00:38:09.683 ] 00:38:09.683 }' 00:38:09.683 18:58:25 keyring_file -- keyring/file.sh@115 -- # killprocess 131103 00:38:09.683 18:58:25 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 131103 ']' 00:38:09.683 18:58:25 keyring_file -- common/autotest_common.sh@954 -- # kill -0 131103 00:38:09.683 18:58:25 keyring_file -- common/autotest_common.sh@955 -- # uname 00:38:09.683 18:58:25 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:09.683 18:58:25 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 131103 00:38:09.683 killing process with pid 131103 00:38:09.683 Received shutdown signal, test time was about 1.000000 seconds 00:38:09.683 00:38:09.683 Latency(us) 00:38:09.683 [2024-12-14T18:58:25.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:09.683 [2024-12-14T18:58:25.436Z] 
=================================================================================================================== 00:38:09.683 [2024-12-14T18:58:25.436Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:09.683 18:58:25 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:09.683 18:58:25 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:09.683 18:58:25 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 131103' 00:38:09.683 18:58:25 keyring_file -- common/autotest_common.sh@969 -- # kill 131103 00:38:09.683 18:58:25 keyring_file -- common/autotest_common.sh@974 -- # wait 131103 00:38:09.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:09.942 18:58:25 keyring_file -- keyring/file.sh@118 -- # bperfpid=131558 00:38:09.942 18:58:25 keyring_file -- keyring/file.sh@120 -- # waitforlisten 131558 /var/tmp/bperf.sock 00:38:09.942 18:58:25 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 131558 ']' 00:38:09.942 18:58:25 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:09.942 18:58:25 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:09.942 18:58:25 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:09.942 18:58:25 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:09.942 18:58:25 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:09.942 18:58:25 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:09.942 "subsystems": [ 00:38:09.942 { 00:38:09.942 "subsystem": "keyring", 00:38:09.942 "config": [ 00:38:09.942 { 00:38:09.942 "method": "keyring_file_add_key", 00:38:09.942 "params": { 00:38:09.942 "name": "key0", 00:38:09.942 "path": "/tmp/tmp.27iilZ3ozN" 00:38:09.942 } 00:38:09.942 }, 00:38:09.942 { 00:38:09.942 "method": "keyring_file_add_key", 00:38:09.942 "params": { 00:38:09.942 "name": "key1", 00:38:09.942 "path": "/tmp/tmp.bu61htUkWj" 00:38:09.942 } 00:38:09.942 } 00:38:09.942 ] 00:38:09.942 }, 00:38:09.942 { 00:38:09.942 "subsystem": "iobuf", 00:38:09.942 "config": [ 00:38:09.942 { 00:38:09.942 "method": "iobuf_set_options", 00:38:09.942 "params": { 00:38:09.942 "large_bufsize": 135168, 00:38:09.942 "large_pool_count": 1024, 00:38:09.942 "small_bufsize": 8192, 00:38:09.942 "small_pool_count": 8192 00:38:09.942 } 00:38:09.942 } 00:38:09.942 ] 00:38:09.942 }, 00:38:09.942 { 00:38:09.942 "subsystem": "sock", 00:38:09.942 "config": [ 00:38:09.942 { 00:38:09.942 "method": "sock_set_default_impl", 00:38:09.942 "params": { 00:38:09.942 "impl_name": "posix" 00:38:09.942 } 00:38:09.942 }, 00:38:09.942 { 00:38:09.942 "method": "sock_impl_set_options", 00:38:09.942 "params": { 00:38:09.942 "enable_ktls": false, 00:38:09.942 "enable_placement_id": 0, 00:38:09.942 "enable_quickack": false, 00:38:09.942 "enable_recv_pipe": true, 00:38:09.942 "enable_zerocopy_send_client": false, 00:38:09.942 "enable_zerocopy_send_server": true, 00:38:09.942 "impl_name": "ssl", 00:38:09.942 "recv_buf_size": 4096, 00:38:09.942 "send_buf_size": 4096, 00:38:09.942 "tls_version": 0, 00:38:09.942 "zerocopy_threshold": 0 00:38:09.942 } 00:38:09.942 }, 00:38:09.942 { 00:38:09.942 "method": "sock_impl_set_options", 00:38:09.942 "params": { 00:38:09.942 "enable_ktls": 
false, 00:38:09.942 "enable_placement_id": 0, 00:38:09.942 "enable_quickack": false, 00:38:09.942 "enable_recv_pipe": true, 00:38:09.942 "enable_zerocopy_send_client": false, 00:38:09.942 "enable_zerocopy_send_server": true, 00:38:09.942 "impl_name": "posix", 00:38:09.942 "recv_buf_size": 2097152, 00:38:09.942 "send_buf_size": 2097152, 00:38:09.942 "tls_version": 0, 00:38:09.942 "zerocopy_threshold": 0 00:38:09.942 } 00:38:09.942 } 00:38:09.942 ] 00:38:09.942 }, 00:38:09.942 { 00:38:09.942 "subsystem": "vmd", 00:38:09.942 "config": [] 00:38:09.942 }, 00:38:09.942 { 00:38:09.942 "subsystem": "accel", 00:38:09.942 "config": [ 00:38:09.942 { 00:38:09.942 "method": "accel_set_options", 00:38:09.942 "params": { 00:38:09.942 "buf_count": 2048, 00:38:09.942 "large_cache_size": 16, 00:38:09.942 "sequence_count": 2048, 00:38:09.942 "small_cache_size": 128, 00:38:09.942 "task_count": 2048 00:38:09.942 } 00:38:09.942 } 00:38:09.942 ] 00:38:09.942 }, 00:38:09.942 { 00:38:09.942 "subsystem": "bdev", 00:38:09.942 "config": [ 00:38:09.942 { 00:38:09.942 "method": "bdev_set_options", 00:38:09.942 "params": { 00:38:09.942 "bdev_auto_examine": true, 00:38:09.942 "bdev_io_cache_size": 256, 00:38:09.942 "bdev_io_pool_size": 65535, 00:38:09.942 "iobuf_large_cache_size": 16, 00:38:09.942 "iobuf_small_cache_size": 128 00:38:09.942 } 00:38:09.942 }, 00:38:09.942 { 00:38:09.942 "method": "bdev_raid_set_options", 00:38:09.942 "params": { 00:38:09.942 "process_max_bandwidth_mb_sec": 0, 00:38:09.942 "process_window_size_kb": 1024 00:38:09.942 } 00:38:09.942 }, 00:38:09.942 { 00:38:09.942 "method": "bdev_iscsi_set_options", 00:38:09.942 "params": { 00:38:09.942 "timeout_sec": 30 00:38:09.942 } 00:38:09.942 }, 00:38:09.942 { 00:38:09.942 "method": "bdev_nvme_set_options", 00:38:09.942 "params": { 00:38:09.942 "action_on_timeout": "none", 00:38:09.942 "allow_accel_sequence": false, 00:38:09.942 "arbitration_burst": 0, 00:38:09.942 "bdev_retry_count": 3, 00:38:09.942 "ctrlr_loss_timeout_sec": 0, 00:38:09.942 "delay_cmd_submit": true, 00:38:09.942 "dhchap_dhgroups": [ 00:38:09.942 "null", 00:38:09.942 "ffdhe2048", 00:38:09.942 "ffdhe3072", 00:38:09.942 "ffdhe4096", 00:38:09.942 "ffdhe6144", 00:38:09.942 "ffdhe8192" 00:38:09.942 ], 00:38:09.942 "dhchap_digests": [ 00:38:09.942 "sha256", 00:38:09.942 "sha384", 00:38:09.942 "sha512" 00:38:09.942 ], 00:38:09.942 "disable_auto_failback": false, 00:38:09.942 "fast_io_fail_timeout_sec": 0, 00:38:09.942 "generate_uuids": false, 00:38:09.942 "high_priority_weight": 0, 00:38:09.942 "io_path_stat": false, 00:38:09.942 "io_queue_requests": 512, 00:38:09.942 "keep_alive_timeout_ms": 10000, 00:38:09.942 "low_priority_weight": 0, 00:38:09.942 "medium_priority_weight": 0, 00:38:09.942 "nvme_adminq_poll_period_us": 10000, 00:38:09.942 "nvme_error_stat": false, 00:38:09.942 "nvme_ioq_poll_period_us": 0, 00:38:09.942 "rdma_cm_event_timeout_ms": 0, 00:38:09.942 "rdma_max_cq_size": 0, 00:38:09.942 "rdma_srq_size": 0, 00:38:09.943 "reconnect_delay_sec": 0, 00:38:09.943 "timeout_admin_us": 0, 00:38:09.943 "timeout_us": 0, 00:38:09.943 "transport_ack_timeout": 0, 00:38:09.943 "transport_retry_count": 4, 00:38:09.943 "transport_tos": 0 00:38:09.943 } 00:38:09.943 }, 00:38:09.943 { 00:38:09.943 "method": "bdev_nvme_attach_controller", 00:38:09.943 "params": { 00:38:09.943 "adrfam": "IPv4", 00:38:09.943 "ctrlr_loss_timeout_sec": 0, 00:38:09.943 "ddgst": false, 00:38:09.943 "fast_io_fail_timeout_sec": 0, 00:38:09.943 "hdgst": false, 00:38:09.943 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:09.943 
"name": "nvme0", 00:38:09.943 "prchk_guard": false, 00:38:09.943 "prchk_reftag": false, 00:38:09.943 "psk": "key0", 00:38:09.943 "reconnect_delay_sec": 0, 00:38:09.943 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:09.943 "traddr": "127.0.0.1", 00:38:09.943 "trsvcid": "4420", 00:38:09.943 "trtype": "TCP" 00:38:09.943 } 00:38:09.943 }, 00:38:09.943 { 00:38:09.943 "method": "bdev_nvme_set_hotplug", 00:38:09.943 "params": { 00:38:09.943 "enable": false, 00:38:09.943 "period_us": 100000 00:38:09.943 } 00:38:09.943 }, 00:38:09.943 { 00:38:09.943 "method": "bdev_wait_for_examine" 00:38:09.943 } 00:38:09.943 ] 00:38:09.943 }, 00:38:09.943 { 00:38:09.943 "subsystem": "nbd", 00:38:09.943 "config": [] 00:38:09.943 } 00:38:09.943 ] 00:38:09.943 }' 00:38:09.943 18:58:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:09.943 [2024-12-14 18:58:25.537687] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:09.943 [2024-12-14 18:58:25.537984] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131558 ] 00:38:09.943 [2024-12-14 18:58:25.676651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:10.201 [2024-12-14 18:58:25.730223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:10.201 [2024-12-14 18:58:25.933092] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:11.137 18:58:26 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:11.137 18:58:26 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:38:11.137 18:58:26 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:11.137 18:58:26 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:11.137 18:58:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:11.137 18:58:26 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:11.137 18:58:26 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:11.137 18:58:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:11.137 18:58:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:11.138 18:58:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:11.138 18:58:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:11.138 18:58:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:11.396 18:58:27 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:11.396 18:58:27 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:11.396 18:58:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:11.396 18:58:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:11.396 18:58:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:11.396 18:58:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:11.396 18:58:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:11.963 18:58:27 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:11.963 18:58:27 keyring_file -- keyring/file.sh@124 -- # bperf_cmd 
bdev_nvme_get_controllers 00:38:11.963 18:58:27 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:11.963 18:58:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:12.222 18:58:27 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:12.222 18:58:27 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:12.222 18:58:27 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.27iilZ3ozN /tmp/tmp.bu61htUkWj 00:38:12.222 18:58:27 keyring_file -- keyring/file.sh@20 -- # killprocess 131558 00:38:12.222 18:58:27 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 131558 ']' 00:38:12.222 18:58:27 keyring_file -- common/autotest_common.sh@954 -- # kill -0 131558 00:38:12.222 18:58:27 keyring_file -- common/autotest_common.sh@955 -- # uname 00:38:12.222 18:58:27 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:12.222 18:58:27 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 131558 00:38:12.222 killing process with pid 131558 00:38:12.222 Received shutdown signal, test time was about 1.000000 seconds 00:38:12.222 00:38:12.222 Latency(us) 00:38:12.222 [2024-12-14T18:58:27.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:12.222 [2024-12-14T18:58:27.975Z] =================================================================================================================== 00:38:12.222 [2024-12-14T18:58:27.975Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:12.222 18:58:27 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:12.222 18:58:27 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:12.222 18:58:27 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 131558' 00:38:12.222 18:58:27 keyring_file -- common/autotest_common.sh@969 -- # kill 131558 00:38:12.222 18:58:27 keyring_file -- common/autotest_common.sh@974 -- # wait 131558 00:38:12.481 18:58:28 keyring_file -- keyring/file.sh@21 -- # killprocess 131080 00:38:12.481 18:58:28 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 131080 ']' 00:38:12.481 18:58:28 keyring_file -- common/autotest_common.sh@954 -- # kill -0 131080 00:38:12.481 18:58:28 keyring_file -- common/autotest_common.sh@955 -- # uname 00:38:12.481 18:58:28 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:12.481 18:58:28 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 131080 00:38:12.481 killing process with pid 131080 00:38:12.481 18:58:28 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:12.481 18:58:28 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:12.481 18:58:28 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 131080' 00:38:12.481 18:58:28 keyring_file -- common/autotest_common.sh@969 -- # kill 131080 00:38:12.481 18:58:28 keyring_file -- common/autotest_common.sh@974 -- # wait 131080 00:38:12.740 00:38:12.740 real 0m15.437s 00:38:12.740 user 0m38.767s 00:38:12.740 sys 0m3.350s 00:38:12.740 18:58:28 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:12.740 18:58:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:12.740 ************************************ 00:38:12.740 END TEST keyring_file 00:38:12.740 ************************************ 00:38:12.740 18:58:28 -- 
spdk/autotest.sh@289 -- # [[ y == y ]] 00:38:12.740 18:58:28 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:38:12.740 18:58:28 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:12.740 18:58:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:12.740 18:58:28 -- common/autotest_common.sh@10 -- # set +x 00:38:12.740 ************************************ 00:38:12.740 START TEST keyring_linux 00:38:12.740 ************************************ 00:38:12.740 18:58:28 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:38:13.001 Joined session keyring: 123444058 00:38:13.001 * Looking for test storage... 00:38:13.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:38:13.001 18:58:28 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:13.001 18:58:28 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:38:13.001 18:58:28 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:13.001 18:58:28 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:13.001 18:58:28 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:13.001 18:58:28 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:13.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.001 --rc genhtml_branch_coverage=1 00:38:13.001 --rc genhtml_function_coverage=1 00:38:13.001 --rc genhtml_legend=1 00:38:13.001 --rc geninfo_all_blocks=1 00:38:13.001 --rc geninfo_unexecuted_blocks=1 00:38:13.001 00:38:13.001 ' 00:38:13.001 18:58:28 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:13.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.001 --rc genhtml_branch_coverage=1 00:38:13.001 --rc genhtml_function_coverage=1 00:38:13.001 --rc genhtml_legend=1 00:38:13.001 --rc geninfo_all_blocks=1 00:38:13.001 --rc geninfo_unexecuted_blocks=1 00:38:13.001 00:38:13.001 ' 00:38:13.001 18:58:28 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:13.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.001 --rc genhtml_branch_coverage=1 00:38:13.001 --rc genhtml_function_coverage=1 00:38:13.001 --rc genhtml_legend=1 00:38:13.001 --rc geninfo_all_blocks=1 00:38:13.001 --rc geninfo_unexecuted_blocks=1 00:38:13.001 00:38:13.001 ' 00:38:13.001 18:58:28 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:13.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.001 --rc genhtml_branch_coverage=1 00:38:13.001 --rc genhtml_function_coverage=1 00:38:13.001 --rc genhtml_legend=1 00:38:13.001 --rc geninfo_all_blocks=1 00:38:13.001 --rc geninfo_unexecuted_blocks=1 00:38:13.001 00:38:13.001 ' 00:38:13.001 18:58:28 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:38:13.001 18:58:28 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:13.001 18:58:28 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:13.001 18:58:28 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:13.001 18:58:28 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:13.001 18:58:28 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:13.001 18:58:28 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:13.001 18:58:28 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:13.001 18:58:28 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:13.001 18:58:28 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:13.001 18:58:28 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:13.001 18:58:28 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:13.001 18:58:28 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:13.001 18:58:28 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81b74ad8-8517-4157-88ee-39db73f4d0b6 00:38:13.001 18:58:28 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=81b74ad8-8517-4157-88ee-39db73f4d0b6 00:38:13.001 18:58:28 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:13.001 18:58:28 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:13.001 18:58:28 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:13.001 18:58:28 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:13.001 18:58:28 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:13.001 18:58:28 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:13.002 18:58:28 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:13.002 18:58:28 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:13.002 18:58:28 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.002 18:58:28 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.002 18:58:28 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.002 18:58:28 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:13.002 18:58:28 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@51 -- # : 0 
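For orientation, the host identity exported a few records above comes straight from nvme-cli. A minimal sketch of that setup (the uuid differs on every run, and the suffix-stripping shown for NVME_HOSTID is an assumed reconstruction that matches the traced values, not the verbatim common.sh code):

# Host identity used by the NVMe/TCP tests (sketch)
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:81b74ad8-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed derivation: strip everything through "uuid:"
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")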
00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:13.002 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:13.002 18:58:28 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:13.002 18:58:28 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:13.002 18:58:28 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:13.002 18:58:28 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:13.002 18:58:28 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:13.002 18:58:28 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:13.002 18:58:28 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:13.002 18:58:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:13.002 18:58:28 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:13.002 18:58:28 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:13.002 18:58:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:13.002 18:58:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:13.002 18:58:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@729 -- # python - 00:38:13.002 18:58:28 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:13.002 /tmp/:spdk-test:key0 00:38:13.002 18:58:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:13.002 18:58:28 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:13.002 18:58:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:13.002 18:58:28 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:13.002 18:58:28 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:13.002 18:58:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:13.002 18:58:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:13.002 18:58:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:38:13.002 18:58:28 keyring_linux -- nvmf/common.sh@729 -- # python - 00:38:13.276 18:58:28 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:13.276 /tmp/:spdk-test:key1 00:38:13.276 18:58:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:13.276 18:58:28 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=131716 00:38:13.276 18:58:28 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:13.276 18:58:28 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 131716 00:38:13.276 18:58:28 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 131716 ']' 00:38:13.276 18:58:28 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:13.276 18:58:28 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:13.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:13.276 18:58:28 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:13.276 18:58:28 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:13.276 18:58:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:13.276 [2024-12-14 18:58:28.874944] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
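The prep_key/format_interchange_psk trace above ends in an opaque `python -` heredoc. What that step plausibly computes, judging from the NVMeTLSkey-1 strings registered with keyctl just below, is the TLS PSK interchange form: the key bytes plus a little-endian CRC32, base64-encoded between the prefix and the two-hex-digit digest field. A self-contained sketch of that reconstruction (not the verbatim nvmf/common.sh helper):

# Sketch: produce the interchange-format PSK the way the traced helper appears to
format_interchange_psk() {
    local key=$1 digest=$2
    python - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC32 appended to the key bytes
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PYEOF
}

format_interchange_psk 00112233445566778899aabbccddeeff 0
# expected to reproduce the NVMeTLSkey-1:00:MDAx...JEiQ: value registered below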
00:38:13.276 [2024-12-14 18:58:28.875049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131716 ] 00:38:13.276 [2024-12-14 18:58:29.014164] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:13.550 [2024-12-14 18:58:29.093171] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:14.116 18:58:29 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:14.116 18:58:29 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:38:14.116 18:58:29 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:14.116 18:58:29 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.116 18:58:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:14.116 [2024-12-14 18:58:29.863024] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:14.375 null0 00:38:14.375 [2024-12-14 18:58:29.894985] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:14.375 [2024-12-14 18:58:29.895190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:14.375 18:58:29 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.375 18:58:29 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:14.375 833936336 00:38:14.375 18:58:29 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:14.375 807731591 00:38:14.375 18:58:29 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=131752 00:38:14.375 18:58:29 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 131752 /var/tmp/bperf.sock 00:38:14.375 18:58:29 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 131752 ']' 00:38:14.375 18:58:29 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:14.375 18:58:29 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:14.375 18:58:29 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:14.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:14.375 18:58:29 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:14.375 18:58:29 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:14.375 18:58:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:14.375 [2024-12-14 18:58:29.986977] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
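Both PSKs now live in the session keyring (@s), keyed by the serial numbers printed above (833936336 and 807731591). The rest of linux.sh drives them through the keyutils lifecycle; the individual commands are all traced further down, but grouped together the flow is:

# Session-keyring lifecycle exercised below (commands as traced, grouped here)
sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:<base64>:" @s)  # add; prints the serial
keyctl search @s user :spdk-test:key0                                 # resolve name to serial
keyctl print "$sn"                                                    # dump payload for comparison
keyctl unlink "$sn"                                                   # reports "1 links removed"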
00:38:14.375 [2024-12-14 18:58:29.987063] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131752 ] 00:38:14.634 [2024-12-14 18:58:30.129203] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:14.634 [2024-12-14 18:58:30.206341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:15.200 18:58:30 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:15.200 18:58:30 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:38:15.200 18:58:30 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:15.200 18:58:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:15.459 18:58:31 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:15.459 18:58:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:16.027 18:58:31 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:16.027 18:58:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:16.027 [2024-12-14 18:58:31.691031] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:16.027 nvme0n1 00:38:16.286 18:58:31 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:38:16.286 18:58:31 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:16.286 18:58:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:16.286 18:58:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:16.286 18:58:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:16.286 18:58:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:16.545 18:58:32 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:16.545 18:58:32 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:16.545 18:58:32 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:16.545 18:58:32 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:16.545 18:58:32 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:16.545 18:58:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:16.545 18:58:32 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:16.803 18:58:32 keyring_linux -- keyring/linux.sh@25 -- # sn=833936336 00:38:16.803 18:58:32 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:16.804 18:58:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:16.804 18:58:32 keyring_linux -- keyring/linux.sh@26 -- # [[ 833936336 == \8\3\3\9\3\6\3\3\6 ]] 00:38:16.804 18:58:32 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 833936336 00:38:16.804 18:58:32 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:16.804 18:58:32 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:16.804 Running I/O for 1 seconds... 00:38:17.739 11427.00 IOPS, 44.64 MiB/s 00:38:17.739 Latency(us) 00:38:17.739 [2024-12-14T18:58:33.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:17.739 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:17.739 nvme0n1 : 1.01 11430.14 44.65 0.00 0.00 11132.96 8519.68 18588.39 00:38:17.739 [2024-12-14T18:58:33.492Z] =================================================================================================================== 00:38:17.739 [2024-12-14T18:58:33.492Z] Total : 11430.14 44.65 0.00 0.00 11132.96 8519.68 18588.39 00:38:17.739 { 00:38:17.739 "results": [ 00:38:17.739 { 00:38:17.739 "job": "nvme0n1", 00:38:17.739 "core_mask": "0x2", 00:38:17.739 "workload": "randread", 00:38:17.739 "status": "finished", 00:38:17.739 "queue_depth": 128, 00:38:17.739 "io_size": 4096, 00:38:17.739 "runtime": 1.010924, 00:38:17.739 "iops": 11430.137181430058, 00:38:17.739 "mibps": 44.648973364961165, 00:38:17.739 "io_failed": 0, 00:38:17.739 "io_timeout": 0, 00:38:17.739 "avg_latency_us": 11132.960547578774, 00:38:17.739 "min_latency_us": 8519.68, 00:38:17.739 "max_latency_us": 18588.392727272727 00:38:17.739 } 00:38:17.739 ], 00:38:17.739 "core_count": 1 00:38:17.740 } 00:38:17.740 18:58:33 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:17.740 18:58:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:18.307 18:58:33 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:18.307 18:58:33 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:18.307 18:58:33 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:18.307 18:58:33 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:18.307 18:58:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:18.307 18:58:33 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:18.307 18:58:34 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:18.307 18:58:34 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:18.307 18:58:34 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:18.307 18:58:34 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:18.307 18:58:34 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:38:18.307 18:58:34 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:18.307 18:58:34 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:18.307 18:58:34 keyring_linux -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:18.307 18:58:34 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:18.307 18:58:34 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:18.307 18:58:34 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:18.307 18:58:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:18.875 [2024-12-14 18:58:34.328552] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:18.875 [2024-12-14 18:58:34.329541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b9710 (107): Transport endpoint is not connected 00:38:18.875 [2024-12-14 18:58:34.330533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b9710 (9): Bad file descriptor 00:38:18.875 [2024-12-14 18:58:34.331531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:18.875 [2024-12-14 18:58:34.331571] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:18.875 [2024-12-14 18:58:34.331581] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:18.875 [2024-12-14 18:58:34.331591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
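The attach attempt with :spdk-test:key1 runs under the harness's NOT wrapper, so the controller errors above and the JSON-RPC failure that follows are the test passing, not failing. Conceptually, NOT just inverts the exit status; a sketch of the idea (the real autotest_common.sh helper, per the es bookkeeping traced here, additionally treats exit codes above 128, i.e. fatal signals, as genuine failures):

# Sketch of the negative-test wrapper (simplified)
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded
    fi
    return 0       # the failure we wanted
}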
00:38:18.875 2024/12/14 18:58:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:38:18.875 request: 00:38:18.875 { 00:38:18.875 "method": "bdev_nvme_attach_controller", 00:38:18.875 "params": { 00:38:18.875 "name": "nvme0", 00:38:18.875 "trtype": "tcp", 00:38:18.875 "traddr": "127.0.0.1", 00:38:18.875 "adrfam": "ipv4", 00:38:18.875 "trsvcid": "4420", 00:38:18.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:18.875 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:18.875 "prchk_reftag": false, 00:38:18.875 "prchk_guard": false, 00:38:18.875 "hdgst": false, 00:38:18.875 "ddgst": false, 00:38:18.875 "psk": ":spdk-test:key1", 00:38:18.875 "allow_unrecognized_csi": false 00:38:18.875 } 00:38:18.875 } 00:38:18.875 Got JSON-RPC error response 00:38:18.875 GoRPCClient: error on JSON-RPC call 00:38:18.875 18:58:34 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:38:18.875 18:58:34 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:18.875 18:58:34 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:18.875 18:58:34 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:18.875 18:58:34 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:18.875 18:58:34 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:18.875 18:58:34 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:18.875 18:58:34 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:18.875 18:58:34 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:18.875 18:58:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:18.875 18:58:34 keyring_linux -- keyring/linux.sh@33 -- # sn=833936336 00:38:18.875 18:58:34 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 833936336 00:38:18.875 1 links removed 00:38:18.875 18:58:34 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:18.875 18:58:34 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:18.875 18:58:34 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:18.875 18:58:34 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:18.875 18:58:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:18.875 18:58:34 keyring_linux -- keyring/linux.sh@33 -- # sn=807731591 00:38:18.875 18:58:34 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 807731591 00:38:18.875 1 links removed 00:38:18.875 18:58:34 keyring_linux -- keyring/linux.sh@41 -- # killprocess 131752 00:38:18.875 18:58:34 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 131752 ']' 00:38:18.875 18:58:34 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 131752 00:38:18.875 18:58:34 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:38:18.875 18:58:34 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:18.875 18:58:34 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 131752 00:38:18.875 18:58:34 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:18.875 
18:58:34 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:18.875 killing process with pid 131752 00:38:18.875 18:58:34 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 131752' 00:38:18.875 18:58:34 keyring_linux -- common/autotest_common.sh@969 -- # kill 131752 00:38:18.875 Received shutdown signal, test time was about 1.000000 seconds 00:38:18.875 00:38:18.875 Latency(us) 00:38:18.875 [2024-12-14T18:58:34.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:18.875 [2024-12-14T18:58:34.628Z] =================================================================================================================== 00:38:18.875 [2024-12-14T18:58:34.628Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:18.875 18:58:34 keyring_linux -- common/autotest_common.sh@974 -- # wait 131752 00:38:19.134 18:58:34 keyring_linux -- keyring/linux.sh@42 -- # killprocess 131716 00:38:19.134 18:58:34 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 131716 ']' 00:38:19.134 18:58:34 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 131716 00:38:19.134 18:58:34 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:38:19.134 18:58:34 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:19.134 18:58:34 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 131716 00:38:19.134 18:58:34 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:19.134 18:58:34 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:19.134 killing process with pid 131716 00:38:19.134 18:58:34 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 131716' 00:38:19.134 18:58:34 keyring_linux -- common/autotest_common.sh@969 -- # kill 131716 00:38:19.134 18:58:34 keyring_linux -- common/autotest_common.sh@974 -- # wait 131716 00:38:19.711 00:38:19.711 real 0m6.729s 00:38:19.711 user 0m12.713s 00:38:19.711 sys 0m1.764s 00:38:19.711 18:58:35 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:19.711 ************************************ 00:38:19.711 END TEST keyring_linux 00:38:19.711 ************************************ 00:38:19.711 18:58:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:19.711 18:58:35 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:38:19.711 18:58:35 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:19.711 18:58:35 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:19.711 18:58:35 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:38:19.711 18:58:35 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:38:19.711 18:58:35 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:38:19.711 18:58:35 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:19.711 18:58:35 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:19.711 18:58:35 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:19.711 18:58:35 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:38:19.711 18:58:35 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:19.711 18:58:35 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:38:19.711 18:58:35 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:19.711 18:58:35 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:19.711 18:58:35 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:38:19.711 18:58:35 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:38:19.711 18:58:35 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:38:19.711 18:58:35 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:38:19.711 18:58:35 -- common/autotest_common.sh@10 -- # set +x 00:38:19.711 18:58:35 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:38:19.711 18:58:35 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:38:19.711 18:58:35 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:38:19.711 18:58:35 -- common/autotest_common.sh@10 -- # set +x 00:38:21.612 INFO: APP EXITING 00:38:21.612 INFO: killing all VMs 00:38:21.612 INFO: killing vhost app 00:38:21.612 INFO: EXIT DONE 00:38:22.179 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:22.438 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:38:22.438 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:38:23.005 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:23.005 Cleaning 00:38:23.005 Removing: /var/run/dpdk/spdk0/config 00:38:23.005 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:23.005 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:23.005 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:23.005 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:23.005 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:23.264 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:23.264 Removing: /var/run/dpdk/spdk1/config 00:38:23.264 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:23.264 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:23.264 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:23.264 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:23.264 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:23.264 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:23.264 Removing: /var/run/dpdk/spdk2/config 00:38:23.264 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:23.264 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:23.264 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:23.264 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:23.264 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:23.264 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:23.264 Removing: /var/run/dpdk/spdk3/config 00:38:23.264 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:23.264 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:23.264 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:23.264 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:23.264 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:23.264 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:23.264 Removing: /var/run/dpdk/spdk4/config 00:38:23.264 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:23.264 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:23.264 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:23.264 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:23.264 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:23.264 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:23.264 Removing: /dev/shm/nvmf_trace.0 00:38:23.264 Removing: /dev/shm/spdk_tgt_trace.pid71551 00:38:23.264 Removing: /var/run/dpdk/spdk0 00:38:23.265 Removing: /var/run/dpdk/spdk1 00:38:23.265 Removing: /var/run/dpdk/spdk2 00:38:23.265 Removing: /var/run/dpdk/spdk3 00:38:23.265 Removing: /var/run/dpdk/spdk4 00:38:23.265 Removing: /var/run/dpdk/spdk_pid100457 00:38:23.265 Removing: 
/var/run/dpdk/spdk_pid100561 00:38:23.265 Removing: /var/run/dpdk/spdk_pid100720 00:38:23.265 Removing: /var/run/dpdk/spdk_pid100759 00:38:23.265 Removing: /var/run/dpdk/spdk_pid100817 00:38:23.265 Removing: /var/run/dpdk/spdk_pid100862 00:38:23.265 Removing: /var/run/dpdk/spdk_pid101046 00:38:23.265 Removing: /var/run/dpdk/spdk_pid101202 00:38:23.265 Removing: /var/run/dpdk/spdk_pid101493 00:38:23.265 Removing: /var/run/dpdk/spdk_pid101609 00:38:23.265 Removing: /var/run/dpdk/spdk_pid101872 00:38:23.265 Removing: /var/run/dpdk/spdk_pid101984 00:38:23.265 Removing: /var/run/dpdk/spdk_pid102120 00:38:23.265 Removing: /var/run/dpdk/spdk_pid102515 00:38:23.265 Removing: /var/run/dpdk/spdk_pid102955 00:38:23.265 Removing: /var/run/dpdk/spdk_pid102956 00:38:23.265 Removing: /var/run/dpdk/spdk_pid102957 00:38:23.265 Removing: /var/run/dpdk/spdk_pid103229 00:38:23.265 Removing: /var/run/dpdk/spdk_pid103490 00:38:23.265 Removing: /var/run/dpdk/spdk_pid103493 00:38:23.265 Removing: /var/run/dpdk/spdk_pid105854 00:38:23.265 Removing: /var/run/dpdk/spdk_pid106224 00:38:23.265 Removing: /var/run/dpdk/spdk_pid106814 00:38:23.265 Removing: /var/run/dpdk/spdk_pid106826 00:38:23.265 Removing: /var/run/dpdk/spdk_pid107217 00:38:23.265 Removing: /var/run/dpdk/spdk_pid107237 00:38:23.265 Removing: /var/run/dpdk/spdk_pid107251 00:38:23.265 Removing: /var/run/dpdk/spdk_pid107284 00:38:23.265 Removing: /var/run/dpdk/spdk_pid107289 00:38:23.265 Removing: /var/run/dpdk/spdk_pid107443 00:38:23.265 Removing: /var/run/dpdk/spdk_pid107445 00:38:23.265 Removing: /var/run/dpdk/spdk_pid107548 00:38:23.265 Removing: /var/run/dpdk/spdk_pid107555 00:38:23.265 Removing: /var/run/dpdk/spdk_pid107659 00:38:23.265 Removing: /var/run/dpdk/spdk_pid107662 00:38:23.524 Removing: /var/run/dpdk/spdk_pid108196 00:38:23.524 Removing: /var/run/dpdk/spdk_pid108239 00:38:23.524 Removing: /var/run/dpdk/spdk_pid108398 00:38:23.524 Removing: /var/run/dpdk/spdk_pid108520 00:38:23.524 Removing: /var/run/dpdk/spdk_pid108952 00:38:23.524 Removing: /var/run/dpdk/spdk_pid109198 00:38:23.524 Removing: /var/run/dpdk/spdk_pid109735 00:38:23.524 Removing: /var/run/dpdk/spdk_pid110368 00:38:23.524 Removing: /var/run/dpdk/spdk_pid111805 00:38:23.524 Removing: /var/run/dpdk/spdk_pid112441 00:38:23.524 Removing: /var/run/dpdk/spdk_pid112449 00:38:23.524 Removing: /var/run/dpdk/spdk_pid114485 00:38:23.524 Removing: /var/run/dpdk/spdk_pid114580 00:38:23.524 Removing: /var/run/dpdk/spdk_pid114666 00:38:23.524 Removing: /var/run/dpdk/spdk_pid114752 00:38:23.524 Removing: /var/run/dpdk/spdk_pid114906 00:38:23.524 Removing: /var/run/dpdk/spdk_pid114974 00:38:23.524 Removing: /var/run/dpdk/spdk_pid115051 00:38:23.524 Removing: /var/run/dpdk/spdk_pid115128 00:38:23.524 Removing: /var/run/dpdk/spdk_pid115510 00:38:23.524 Removing: /var/run/dpdk/spdk_pid116271 00:38:23.524 Removing: /var/run/dpdk/spdk_pid117655 00:38:23.524 Removing: /var/run/dpdk/spdk_pid117833 00:38:23.524 Removing: /var/run/dpdk/spdk_pid118119 00:38:23.524 Removing: /var/run/dpdk/spdk_pid118668 00:38:23.524 Removing: /var/run/dpdk/spdk_pid119059 00:38:23.524 Removing: /var/run/dpdk/spdk_pid121504 00:38:23.524 Removing: /var/run/dpdk/spdk_pid121544 00:38:23.524 Removing: /var/run/dpdk/spdk_pid121887 00:38:23.524 Removing: /var/run/dpdk/spdk_pid121926 00:38:23.524 Removing: /var/run/dpdk/spdk_pid122330 00:38:23.524 Removing: /var/run/dpdk/spdk_pid122884 00:38:23.524 Removing: /var/run/dpdk/spdk_pid123325 00:38:23.524 Removing: /var/run/dpdk/spdk_pid124397 00:38:23.524 Removing: 
/var/run/dpdk/spdk_pid125436 00:38:23.524 Removing: /var/run/dpdk/spdk_pid125548 00:38:23.524 Removing: /var/run/dpdk/spdk_pid125606 00:38:23.524 Removing: /var/run/dpdk/spdk_pid127185 00:38:23.524 Removing: /var/run/dpdk/spdk_pid127496 00:38:23.524 Removing: /var/run/dpdk/spdk_pid127834 00:38:23.524 Removing: /var/run/dpdk/spdk_pid128404 00:38:23.524 Removing: /var/run/dpdk/spdk_pid128412 00:38:23.524 Removing: /var/run/dpdk/spdk_pid128801 00:38:23.524 Removing: /var/run/dpdk/spdk_pid128961 00:38:23.524 Removing: /var/run/dpdk/spdk_pid129113 00:38:23.524 Removing: /var/run/dpdk/spdk_pid129206 00:38:23.524 Removing: /var/run/dpdk/spdk_pid129407 00:38:23.524 Removing: /var/run/dpdk/spdk_pid129516 00:38:23.524 Removing: /var/run/dpdk/spdk_pid130236 00:38:23.524 Removing: /var/run/dpdk/spdk_pid130266 00:38:23.524 Removing: /var/run/dpdk/spdk_pid130301 00:38:23.524 Removing: /var/run/dpdk/spdk_pid130552 00:38:23.524 Removing: /var/run/dpdk/spdk_pid130582 00:38:23.524 Removing: /var/run/dpdk/spdk_pid130616 00:38:23.524 Removing: /var/run/dpdk/spdk_pid131080 00:38:23.524 Removing: /var/run/dpdk/spdk_pid131103 00:38:23.524 Removing: /var/run/dpdk/spdk_pid131558 00:38:23.524 Removing: /var/run/dpdk/spdk_pid131716 00:38:23.524 Removing: /var/run/dpdk/spdk_pid131752 00:38:23.524 Removing: /var/run/dpdk/spdk_pid71393 00:38:23.524 Removing: /var/run/dpdk/spdk_pid71551 00:38:23.524 Removing: /var/run/dpdk/spdk_pid71812 00:38:23.524 Removing: /var/run/dpdk/spdk_pid71905 00:38:23.524 Removing: /var/run/dpdk/spdk_pid71944 00:38:23.524 Removing: /var/run/dpdk/spdk_pid72059 00:38:23.524 Removing: /var/run/dpdk/spdk_pid72076 00:38:23.524 Removing: /var/run/dpdk/spdk_pid72215 00:38:23.524 Removing: /var/run/dpdk/spdk_pid72495 00:38:23.524 Removing: /var/run/dpdk/spdk_pid72679 00:38:23.524 Removing: /var/run/dpdk/spdk_pid72775 00:38:23.524 Removing: /var/run/dpdk/spdk_pid72875 00:38:23.524 Removing: /var/run/dpdk/spdk_pid72965 00:38:23.524 Removing: /var/run/dpdk/spdk_pid72998 00:38:23.524 Removing: /var/run/dpdk/spdk_pid73033 00:38:23.524 Removing: /var/run/dpdk/spdk_pid73103 00:38:23.524 Removing: /var/run/dpdk/spdk_pid73226 00:38:23.524 Removing: /var/run/dpdk/spdk_pid73861 00:38:23.524 Removing: /var/run/dpdk/spdk_pid73927 00:38:23.524 Removing: /var/run/dpdk/spdk_pid73977 00:38:23.524 Removing: /var/run/dpdk/spdk_pid74006 00:38:23.524 Removing: /var/run/dpdk/spdk_pid74095 00:38:23.783 Removing: /var/run/dpdk/spdk_pid74123 00:38:23.783 Removing: /var/run/dpdk/spdk_pid74202 00:38:23.783 Removing: /var/run/dpdk/spdk_pid74230 00:38:23.783 Removing: /var/run/dpdk/spdk_pid74287 00:38:23.783 Removing: /var/run/dpdk/spdk_pid74304 00:38:23.783 Removing: /var/run/dpdk/spdk_pid74355 00:38:23.783 Removing: /var/run/dpdk/spdk_pid74385 00:38:23.783 Removing: /var/run/dpdk/spdk_pid74545 00:38:23.783 Removing: /var/run/dpdk/spdk_pid74581 00:38:23.783 Removing: /var/run/dpdk/spdk_pid74663 00:38:23.783 Removing: /var/run/dpdk/spdk_pid75160 00:38:23.783 Removing: /var/run/dpdk/spdk_pid75571 00:38:23.783 Removing: /var/run/dpdk/spdk_pid78065 00:38:23.783 Removing: /var/run/dpdk/spdk_pid78105 00:38:23.783 Removing: /var/run/dpdk/spdk_pid78477 00:38:23.783 Removing: /var/run/dpdk/spdk_pid78523 00:38:23.783 Removing: /var/run/dpdk/spdk_pid78929 00:38:23.783 Removing: /var/run/dpdk/spdk_pid79521 00:38:23.783 Removing: /var/run/dpdk/spdk_pid79954 00:38:23.783 Removing: /var/run/dpdk/spdk_pid80986 00:38:23.783 Removing: /var/run/dpdk/spdk_pid82071 00:38:23.783 Removing: /var/run/dpdk/spdk_pid82195 00:38:23.783 Removing: 
/var/run/dpdk/spdk_pid82262 00:38:23.783 Removing: /var/run/dpdk/spdk_pid83882 00:38:23.783 Removing: /var/run/dpdk/spdk_pid84242 00:38:23.783 Removing: /var/run/dpdk/spdk_pid91506 00:38:23.783 Removing: /var/run/dpdk/spdk_pid91951 00:38:23.783 Removing: /var/run/dpdk/spdk_pid92585 00:38:23.783 Removing: /var/run/dpdk/spdk_pid93002 00:38:23.783 Removing: /var/run/dpdk/spdk_pid93009 00:38:23.783 Removing: /var/run/dpdk/spdk_pid93064 00:38:23.783 Removing: /var/run/dpdk/spdk_pid93125 00:38:23.783 Removing: /var/run/dpdk/spdk_pid93180 00:38:23.783 Removing: /var/run/dpdk/spdk_pid93224 00:38:23.783 Removing: /var/run/dpdk/spdk_pid93230 00:38:23.783 Removing: /var/run/dpdk/spdk_pid93257 00:38:23.783 Removing: /var/run/dpdk/spdk_pid93294 00:38:23.783 Removing: /var/run/dpdk/spdk_pid93296 00:38:23.783 Removing: /var/run/dpdk/spdk_pid93360 00:38:23.783 Removing: /var/run/dpdk/spdk_pid93417 00:38:23.783 Removing: /var/run/dpdk/spdk_pid93473 00:38:23.783 Removing: /var/run/dpdk/spdk_pid93511 00:38:23.783 Removing: /var/run/dpdk/spdk_pid93519 00:38:23.783 Removing: /var/run/dpdk/spdk_pid93544 00:38:23.783 Removing: /var/run/dpdk/spdk_pid93827 00:38:23.783 Removing: /var/run/dpdk/spdk_pid93981 00:38:23.783 Removing: /var/run/dpdk/spdk_pid94219 00:38:23.783 Removing: /var/run/dpdk/spdk_pid99951 00:38:23.783 Clean 00:38:23.783 18:58:39 -- common/autotest_common.sh@1451 -- # return 0 00:38:23.783 18:58:39 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:38:23.783 18:58:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:23.783 18:58:39 -- common/autotest_common.sh@10 -- # set +x 00:38:23.783 18:58:39 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:38:23.783 18:58:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:23.783 18:58:39 -- common/autotest_common.sh@10 -- # set +x 00:38:24.042 18:58:39 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:38:24.042 18:58:39 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:38:24.042 18:58:39 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:38:24.042 18:58:39 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:38:24.042 18:58:39 -- spdk/autotest.sh@394 -- # hostname 00:38:24.042 18:58:39 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:38:24.300 geninfo: WARNING: invalid characters removed from testname! 
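The geninfo capture above feeds the merge and filter passes traced next. Stripped of the long --rc flag lists (abbreviated here as $LCOV_OPTS, the variable the harness itself exports), the sequence boils down to:

# Coverage post-processing as traced below ($LCOV_OPTS stands in for the --rc flags)
lcov $LCOV_OPTS -q -c --no-external -d "$rootdir" -t "$(hostname)" -o cov_test.info  # capture
lcov $LCOV_OPTS -q -a cov_base.info -a cov_test.info -o cov_total.info               # merge with baseline
lcov $LCOV_OPTS -q -r cov_total.info '*/dpdk/*' -o cov_total.info                    # drop dpdk sources
# ...followed by further -r passes removing '/usr/*', the vmd example, and app helpers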
00:38:50.843 18:59:02 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:50.843 18:59:05 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:52.746 18:59:08 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:55.279 18:59:10 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:57.810 18:59:13 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:00.369 18:59:15 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:02.903 18:59:18 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:02.903 18:59:18 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:39:02.903 18:59:18 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:39:02.903 18:59:18 -- common/autotest_common.sh@1681 -- $ lcov --version 00:39:02.903 18:59:18 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:39:02.903 18:59:18 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:39:02.903 18:59:18 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:39:02.903 18:59:18 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:39:02.903 18:59:18 -- scripts/common.sh@336 -- $ IFS=.-: 00:39:02.903 18:59:18 -- scripts/common.sh@336 -- $ read -ra ver1 00:39:02.903 18:59:18 -- scripts/common.sh@337 -- $ IFS=.-: 00:39:02.903 18:59:18 -- scripts/common.sh@337 -- $ read -ra ver2 00:39:02.903 18:59:18 -- scripts/common.sh@338 -- $ local 'op=<' 00:39:02.903 18:59:18 -- scripts/common.sh@340 -- $ ver1_l=2 00:39:02.903 18:59:18 -- scripts/common.sh@341 -- $ ver2_l=1 00:39:02.903 18:59:18 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:39:02.903 18:59:18 -- scripts/common.sh@344 -- $ case "$op" in 00:39:02.903 18:59:18 -- scripts/common.sh@345 -- $ : 1 00:39:02.903 18:59:18 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:39:02.903 18:59:18 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:02.903 18:59:18 -- scripts/common.sh@365 -- $ decimal 1 00:39:02.903 18:59:18 -- scripts/common.sh@353 -- $ local d=1 00:39:02.903 18:59:18 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:39:02.903 18:59:18 -- scripts/common.sh@355 -- $ echo 1 00:39:02.903 18:59:18 -- scripts/common.sh@365 -- $ ver1[v]=1 00:39:02.903 18:59:18 -- scripts/common.sh@366 -- $ decimal 2 00:39:02.903 18:59:18 -- scripts/common.sh@353 -- $ local d=2 00:39:02.903 18:59:18 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:39:02.903 18:59:18 -- scripts/common.sh@355 -- $ echo 2 00:39:02.903 18:59:18 -- scripts/common.sh@366 -- $ ver2[v]=2 00:39:02.903 18:59:18 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:39:02.903 18:59:18 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:39:02.903 18:59:18 -- scripts/common.sh@368 -- $ return 0 00:39:02.903 18:59:18 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:02.903 18:59:18 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:39:02.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:02.903 --rc genhtml_branch_coverage=1 00:39:02.903 --rc genhtml_function_coverage=1 00:39:02.903 --rc genhtml_legend=1 00:39:02.903 --rc geninfo_all_blocks=1 00:39:02.903 --rc geninfo_unexecuted_blocks=1 00:39:02.903 00:39:02.903 ' 00:39:02.903 18:59:18 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:39:02.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:02.903 --rc genhtml_branch_coverage=1 00:39:02.903 --rc genhtml_function_coverage=1 00:39:02.903 --rc genhtml_legend=1 00:39:02.903 --rc geninfo_all_blocks=1 00:39:02.903 --rc geninfo_unexecuted_blocks=1 00:39:02.903 00:39:02.903 ' 00:39:02.903 18:59:18 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:39:02.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:02.903 --rc genhtml_branch_coverage=1 00:39:02.903 --rc genhtml_function_coverage=1 00:39:02.903 --rc genhtml_legend=1 00:39:02.903 --rc geninfo_all_blocks=1 00:39:02.903 --rc geninfo_unexecuted_blocks=1 00:39:02.903 00:39:02.903 ' 00:39:02.903 18:59:18 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:39:02.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:02.903 --rc genhtml_branch_coverage=1 00:39:02.903 --rc genhtml_function_coverage=1 00:39:02.903 --rc genhtml_legend=1 00:39:02.903 --rc geninfo_all_blocks=1 00:39:02.903 --rc geninfo_unexecuted_blocks=1 00:39:02.903 00:39:02.903 ' 00:39:02.903 18:59:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:02.903 18:59:18 -- scripts/common.sh@15 -- $ shopt -s extglob 00:39:02.903 18:59:18 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:39:02.903 18:59:18 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:02.903 18:59:18 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:02.903 18:59:18 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.903 18:59:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.903 18:59:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.903 18:59:18 -- paths/export.sh@5 -- $ export PATH 00:39:02.903 18:59:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.903 18:59:18 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:39:02.903 18:59:18 -- common/autobuild_common.sh@479 -- $ date +%s 00:39:02.903 18:59:18 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1734202758.XXXXXX 00:39:02.903 18:59:18 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1734202758.Sje9D2 00:39:02.903 18:59:18 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:39:02.903 18:59:18 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:39:02.903 18:59:18 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:39:02.903 18:59:18 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:39:02.903 18:59:18 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:39:02.903 18:59:18 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:39:02.903 18:59:18 -- common/autobuild_common.sh@495 -- $ get_config_params 00:39:02.903 18:59:18 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:39:02.903 18:59:18 -- common/autotest_common.sh@10 -- $ set +x 00:39:02.903 18:59:18 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:39:02.903 18:59:18 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:39:02.903 18:59:18 -- pm/common@17 -- $ local monitor 
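start_monitor_resources, traced next, launches the two power-monitor collectors with a shared date-stamped log prefix; the matching stop_monitor_resources at the end of the run TERMs them via pid files. A condensed sketch (backgrounding and pid bookkeeping simplified relative to the real pm/common):

# Monitor launch, condensed (pm/common also writes collect-*.pid files under
# $output_dir/power so stop_monitor_resources can kill -TERM them later)
ts=$(date +%s)
for mon in collect-cpu-load collect-vmstat; do
    scripts/perf/pm/$mon -d "$output_dir/power" -l -p "monitor.autopackage.sh.$ts" &
done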
00:39:02.903 18:59:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:02.903 18:59:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:02.903 18:59:18 -- pm/common@25 -- $ sleep 1
00:39:02.903 18:59:18 -- pm/common@21 -- $ date +%s
00:39:02.903 18:59:18 -- pm/common@21 -- $ date +%s
00:39:02.903 18:59:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1734202758
00:39:02.903 18:59:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1734202758
00:39:02.903 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1734202758_collect-cpu-load.pm.log
00:39:02.903 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1734202758_collect-vmstat.pm.log
00:39:03.839 18:59:19 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:39:03.839 18:59:19 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:39:03.839 18:59:19 -- spdk/autopackage.sh@14 -- $ timing_finish
00:39:03.839 18:59:19 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:03.839 18:59:19 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:39:03.839 18:59:19 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:39:03.839 18:59:19 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:39:03.839 18:59:19 -- pm/common@29 -- $ signal_monitor_resources TERM
00:39:03.839 18:59:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:39:03.839 18:59:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:03.839 18:59:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:39:03.839 18:59:19 -- pm/common@44 -- $ pid=133583
00:39:03.839 18:59:19 -- pm/common@50 -- $ kill -TERM 133583
00:39:03.839 18:59:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:03.839 18:59:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:39:03.839 18:59:19 -- pm/common@44 -- $ pid=133585
00:39:03.839 18:59:19 -- pm/common@50 -- $ kill -TERM 133585
00:39:03.839 + [[ -n 6000 ]]
00:39:03.839 + sudo kill 6000
00:39:04.106 [Pipeline] }
00:39:04.121 [Pipeline] // timeout
00:39:04.126 [Pipeline] }
00:39:04.139 [Pipeline] // stage
00:39:04.144 [Pipeline] }
00:39:04.158 [Pipeline] // catchError
00:39:04.168 [Pipeline] stage
00:39:04.170 [Pipeline] { (Stop VM)
00:39:04.182 [Pipeline] sh
00:39:04.461 + vagrant halt
00:39:07.748 ==> default: Halting domain...
00:39:13.030 [Pipeline] sh
00:39:13.310 + vagrant destroy -f
00:39:15.843 ==> default: Removing domain...
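Note: the pm/common trace above shows the resource-monitor lifecycle: the collect-cpu-load and collect-vmstat collectors are launched in the background with their pids recorded under the power/ output directory, stop_monitor_resources is registered as an EXIT trap, and teardown resolves each pid file and sends SIGTERM. A condensed sketch of that pid-file pattern follows; the directory and the vmstat stand-in are illustrative, not the actual collect-* scripts.

    #!/usr/bin/env bash
    # Sketch of the start/trap/kill-by-pidfile pattern traced above.
    outdir=/tmp/power                            # stand-in for the power/ output dir
    mkdir -p "$outdir"

    vmstat 1 > "$outdir/collect-vmstat.log" 2>&1 &   # background collector
    echo $! > "$outdir/collect-vmstat.pid"           # record its pid

    stop_monitor_resources() {
        local pidfile pid
        for pidfile in "$outdir"/*.pid; do
            [[ -e $pidfile ]] || continue        # same guard as pm/common@43
            pid=$(<"$pidfile")
            kill -TERM "$pid" 2>/dev/null || true
        done
    }
    trap stop_monitor_resources EXIT             # fires on normal exit or failure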
00:39:16.115 [Pipeline] sh
00:39:16.396 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output
00:39:16.405 [Pipeline] }
00:39:16.421 [Pipeline] // stage
00:39:16.428 [Pipeline] }
00:39:16.442 [Pipeline] // dir
00:39:16.448 [Pipeline] }
00:39:16.462 [Pipeline] // wrap
00:39:16.468 [Pipeline] }
00:39:16.480 [Pipeline] // catchError
00:39:16.490 [Pipeline] stage
00:39:16.492 [Pipeline] { (Epilogue)
00:39:16.504 [Pipeline] sh
00:39:16.786 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:22.068 [Pipeline] catchError
00:39:22.070 [Pipeline] {
00:39:22.083 [Pipeline] sh
00:39:22.393 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:22.650 Artifacts sizes are good
00:39:22.657 [Pipeline] }
00:39:22.668 [Pipeline] // catchError
00:39:22.677 [Pipeline] archiveArtifacts
00:39:22.683 Archiving artifacts
00:39:22.796 [Pipeline] cleanWs
00:39:22.807 [WS-CLEANUP] Deleting project workspace...
00:39:22.807 [WS-CLEANUP] Deferred wipeout is used...
00:39:22.813 [WS-CLEANUP] done
00:39:22.815 [Pipeline] }
00:39:22.831 [Pipeline] // stage
00:39:22.836 [Pipeline] }
00:39:22.851 [Pipeline] // node
00:39:22.856 [Pipeline] End of Pipeline
00:39:22.921 Finished: SUCCESS